Blog: Peter Lammersma
Enhancing the provided toolset
The new Uniface 10 IDE (Integrated Development Environment) offers a well-equipped toolbox. It provides nearly everything a developer needs to build and maintain software applications. But sometimes you want a bit more, or need to add a personal touch to the tools provided.
Every developer uses several tools and utilities to do their Uniface work effectively.
These are the ones I can’t do without:
- Uniface 10 to build web and mobile applications.
- Notepad++ is the editor for all files I don’t edit with Uniface. It is based on the powerful Scintilla editor.
- To monitor all communications to and from other applications I use Wireshark, which is perfect for inspecting network traffic.
- In addition to Wireshark I use a very old tool called Nettool. This tool is easy to use and perfect for creating an HTTP tunnel.
- To monitor processes on my computer, Sysinternals Process Explorer is indispensable. The developer of this tool, Mark Russinovich, has created a lot of nice tools. If you don’t know about these, just check them out!
- A database management tool – the choice depends on the database being used. During development I use SQLite studio, since the repository is stored in a SQLite database.
- Git and Sourcetree to interface between my local environment and GitLab. A modern developer can’t do without version control.
Most of the time, my work is done in the Uniface IDE. Adding extra utilities was and is very useful. Like most developers, I build my own Uniface utilities. These utilities are components built in Uniface that let you perform actions in the IDE.
The functionality of the utilities I use today is very close to what I have been using for the past few years. In previous versions of Uniface, the additional menu was the place to store additional functionality. Uniface 10 offers two places: User Defined Menus and User Defined Worksheets. The first adds a menu entry in the action or global menu, while the second adds a worksheet to one of the editors.
Using the User Defined Menus or Worksheets is very easy, and powerful at the same time. To use the additional tools I already had in the User Defined Worksheets – the solution I like most – only a few changes were needed, and then I was able to utilise the enhanced functionality Uniface 10 offers. This one-time change is really easy and certainly worthwhile. Your own tools can be integrated into Uniface 10 seamlessly.
I’ll now describe two tools I built a couple of years ago and use now in Uniface 10.
My version control interface
The utility I use every day is a basic version control interface. It consists of a Uniface form to create an export of the object I am working on. With the User Defined Worksheets solution, this form has become an integral part of my IDE. The form with the additional functionality is now (with Uniface 10) a tab in the editor of the object.
Before starting a modification to an object – a component, for instance – I open this worksheet and it checks whether I have the latest version of the object I am about to change. After my work is done, I open this worksheet again to save the latest version of the object to a file on my computer. The User Defined Worksheet has a default operation, ws_refresh, that is fired when the object is changed. This local copy is created by the $UDE function of Uniface. With the external tools Git and Sourcetree, local versions of the objects are stored in GitLab.
I have made this additional tool available on every editor in the IDE. Version control is important regardless of the type of component.
My entity utility
Another tool I use adds value in the entity editor. The flexibility of the User Defined Worksheet solution in Uniface 10 allows me to add tooling to specific editors in the IDE. This entity utility provides additional information and actions in the entity editor of the IDE.
It adds a new tab to the IDE to show me where the entity, which is opened in the editor, is used, and allows me to perform actions on that particular entity. Besides an overview of all the relationships this entity has, it shows all the components on which the entity (or a subtype of it) is used. It offers functionality to open related entities and components. It also compiles all related components.
Figure 1 An example of my Entity Utilities worksheet
Information about the Uniface repository can be read in the meta dictionary – nothing new about that. What is new in Uniface 10 is that information about the current object (as opened in the current instance of the Uniface editor) is available in the U-Bar. The value of the U-Bar is a parameter of the default operation ws_activate. Opening a component in the corresponding editor can be achieved by calling the operation navigateTo in the IDE API component IDE_API_1.
Wrapping it up
Uniface makes it very easy to add functionality to the IDE. It is enough to have a Uniface resources file and a logical path in the assignment file to enhance the IDE. Use it to add your own tools or to utilise those of others. That last option is very interesting 😊. I wonder what tools you have created and use?
This is still something I would really love to see added to Uniface: a library to share sources, frameworks, samples, tools and other add-ons. It could even be an add-on to the IDE itself.
On Uniface.info there is already a section called ‘Uniface utilities, add ons and extras’. If you have any additional Uniface 10 tooling you want to share, please send it to email@example.com and the Uniface Support team will be happy to add it. I look forward to your input!
DSP JS API function:
Putting application developers in control of the presentation layer
HTML5 already provides a powerful set of form controls out of the box, and its functionality is continuing to grow and mature. To get access to all that functionality, application developers need to be able to interface directly with the controls. Uniface 10.3 Dynamic Server Pages provides exactly that capability.
Before we go into detail, let’s see how Uniface’s support for web technology has evolved over time to give developers more and more control over their applications.
The beginning: Uniface Server Pages
Uniface 7.2 provided our first functionality for the web: The Static Server Page, also known as the Uniface Server Page (USP). USPs enable binding between the Uniface server-side data structure and an HTML client side. They handle communication between client and server in a very simple way: the server generates an HTML page complete with data substituted into the HTML, and the browser simply displays that HTML page. Updates are initiated by the browser via a standard HTML submit, after which the server will load the updates and reconnect them with the back end. After that, the server again generates a full-page response with all changes handled, and the whole process starts again from the beginning.
A leap forward: Dynamic Server Pages
Uniface 9.4 introduced the Dynamic Server Page (DSP) allowing Uniface developers to create rich internet applications (RIAs). The biggest difference between a USP and a DSP is the absence of full-page refreshes in the DSP. Obviously, data is still sent to the server and received back by the client, but instead of the whole page being refreshed, only the affected data is returned and merged into the page displayed in the browser. All communication is handled by the Uniface DSP and programming is as easy as writing some ProcScript in a trigger. In addition, Uniface 9.4 DSPs provide out-of-the-box client-side syntax checking, which, in case of a syntax error, avoids a round trip to the server. A DSP consists of a server part and a client part. The client part has a copy of the component definition, which is what allows the client to perform syntax checking.
Introducing the ability to manipulate client data
Application developers take charge of the presentation layer
With Uniface 10.3, we have now also opened up the presentation layer of the client: the Document Object Model (DOM) layer. Using a simple function, a Uniface data object can now get a reference to its bound element in the DOM layer, allowing Uniface developers to access DOM elements in the context of that field, its occurrences, and its instance. The function is:
uniface.field.getBoundElement(ViewId)
From the bound DOM element, it is possible to navigate to sibling and parent elements. In the case of an AttributesOnly field, the same technique can be used to navigate to child elements. This gives Uniface developers full control of the DOM, allowing integration of third-party JS libraries that integrate at DOM level.
In the following code example we will use the new JS function to change the default reporting of client side syntax errors. The webtrigger onSyntaxError is the trigger that gets fired the moment the client encounters a syntax error. The default way for Uniface to respond in this situation is to set an error class on the element bound to the field that is in error. CSS would then style it appropriately. The code below overwrites the default behavior and sets the error class to the parent element of the element bound to the field:
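A minimal sketch of such a trigger body follows. Only getBoundElement itself comes from the API described above; the view id "plain", the error class name "-u-error", and the helper name are assumptions for illustration, not a verified Uniface listing.

```javascript
// Move the syntax-error styling from the field's own element to its
// parent. The helper only manipulates CSS classes, so the logic can be
// exercised outside a Uniface page as well.
function markParentInError(boundElement, errorClass) {
  // Undo the default behavior: clear the error class from the field...
  boundElement.classList.remove(errorClass);
  // ...and set it on the parent element instead.
  boundElement.parentElement.classList.add(errorClass);
}

// Inside the onSyntaxError webtrigger it could be called like this
// (view id and class name are assumed values):
//   markParentInError(uniface.field.getBoundElement("plain"), "-u-error");
```

CSS targeting the error class would then style the whole surrounding element, for example a form group, rather than only the input itself.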
The getBoundElement() function is simple to use and provides full access to the DOM layer of the browser. It opens up communication with any JS technology that needs to interface on the presentation layer.
Blog by Jason Huggins
Uniface Security
The latest releases of Uniface 9 and 10 mark a significant milestone in the enhancement of security, both under the covers and in new functionality to secure applications. In practice, all organizations need to protect business confidentiality, safeguard their competitive edge, adhere to applicable privacy regulations and prevent data theft and manipulation. Protecting data is paramount for practically everyone. It can feel like the wild west at times, with attacks coming from all directions: an employee, a contractor or visitor, a cyber-criminal, malware or ransomware, accidental hackers, curious observers… the list goes on! Whether a data breach is internal, external, malicious or accidental, the risk should be understood, assessed and addressed.
The statistics of count, size and cost to a victim show that global data breaches have increased continually year on year. The current average cost of a breach is in the millions of dollars, with total costs per year globally in the billions. Breach sizes have ranged from tens of millions of confidential records up to many billions of lines of data. Predictions suggest that a clear majority of enterprise traffic will be encrypted throughout 2019. It is important for Uniface to support this, while making it easy to utilize as part of the development and deployment platform.
What is the threat?
There are many threats to data security, and network security exposes a key flaw. There is an inherent weakness in the standard TCP/IP network model and IPv4, because none of the layers include security features as such. The protocols ensure very reliable transmission of data, but they do not fully ensure data integrity or confidentiality. It is extremely easy to sniff and tamper with the data in real time. But wait, what about IPv6, you may ask? Well, IPsec, a security protocol, is built into the IPv6 standard, but it is not mandatory. The official launch of IPv6 was in 2012, yet IPv4 is still the dominant protocol, covering around 75% of deployments. The rate of adoption appears to be slowing, but this does not in any way mean that IPv6 will not become the dominant standard; it may just take a little longer than expected. IPsec within IPv6 will not necessarily become the drop-in solution to the security hole, so it is still valid to apply alternative or additional mechanisms to secure the transmitted data. The Uniface implementation means that the application can, with ease, reliably ensure encryption is applied whatever the underlying IPv’N’ network infrastructure and protocol support may be.
What’s new in network security?
Uniface now has a cryptography layer added to its network stack. The implementation is a TLS layer built on top of the standard TCP driver. The TCP driver itself has been refactored, yielding several improvements. The new TLS driver utilises OpenSSL libraries. OpenSSL, often referred to as the ‘Swiss Army Knife’ of cryptography, is well maintained and supported, has excellent platform coverage and is backed by major organizations. It implements both pre-shared key (PSK) and asymmetric certificate/key pair verification, the latter providing greater levels of security. The cryptography methods supported, called ciphers, are those of OpenSSL; however, by default Uniface will only list the stronger ciphers. The new driver encrypts the network traffic, including IPv6, between Uniface processes, encompassing both shared and exclusive servers. A key feature supported by the TLS driver is ‘Peer Name Verification’, which helps mitigate compromises such as ‘Man in the Middle’ attacks. Configuration is very straightforward, matching the typical driver approach, with familiar mnemonics such as ‘tls:’ and ‘USYS$TLS_PARAMS’. The configuration and various possibilities are well documented in the help.
Security is a shared responsibility spanning development and operations. Being more of a configuration exercise, developers will see little change. The extra processing needed to encrypt and decrypt may have an influence; for example, transaction size and client versus server processing could become a consideration. Note: Uniface benchmarks match the published OpenSSL results. Operations should understand security, TLS and encryption, making sure to pick ciphers that adhere to internal policies whilst maximising performance. The ‘pathscrambler’ is essential and must be used to safeguard the TLS configuration settings. The TLS driver is simple to use and should be considered an essential priority for most.
Global Objects
There are many types of Global Objects, like Messages, Global Procs and Keyboard Translation Tables, to name a few. The Uniface 9 IDF and the Uniface 10 IDE provide editors to maintain the definitions of those objects. You could consider those as the definition-time source objects. Successful compilation of those source objects results in compiled objects, their run-time counterparts. The compiled objects are static objects. User applications can use them, but they have no way of changing them. To change them, you must use the editors in the Uniface development environment (the version 9 IDF or the version 10 IDE) to change their source objects, then compile those and make the resulting compiled objects available to the application.
Counter – the oddball
There is one particular type of Global Object that is different from the others: the Counter. Contrary to other Global Objects, Counters are not static run-time objects. Any application can change them through ProcScript. The numset statement creates a counter or re-initializes its value, and the numgen statement increases a counter’s value. Considering this, you may even consider Counters as run-time data rather than run-time objects. To maintain Counters, Uniface 9 offers the Define Counter dialog. This dialog gives the impression that, like for other Global Objects, it maintains source objects. However, it does not. In fact, there are no source objects for Counters. They only exist as run-time objects, in UOBJ.TEXT. The Define Counter dialog acts directly on those run-time objects. If your application uses Counters, be aware of these aspects, and apply extra care when deploying UOBJ.TEXT. Also, users of the Define Counter dialog might just accidentally change the value of a Counter that is being used by an application.
Migrating Counters to Uniface 10
Uniface 10 is straightforward: it simply regards Counters as user data. The one and only way to change them is through the ProcScript instructions that are intended for just that purpose: numset and numgen. There is no dialog that can be used to adjust Counter values. If you already initialize and maintain your application’s Counters purely by means of ProcScript logic, there is no extra effort involved in the migration of Counters to version 10. This logic will continue to work as it did in version 9. If, on the other hand, you use the IDF’s Define Counter dialog to initialize and maintain your application’s Counters, you will need to act. To achieve the same functionality in version 10, you must implement that logic yourself, using the available ProcScript instructions. Also, you will need to apply the same care as you did in version 9, to make sure that UOBJ.TEXT is properly distributed and/or installed.
This example auto-increments a counter. If it does not exist yet, it creates it and gives it an initial value:

; Auto-increment counter
numgen "MYCOUNTER", 1, "MYLIB"
if ($status == <UPROCERR_COUNTER>)
  ; Counter does not exist.
  ; Create it and give it an initial value of 1001.
  numset "MYCOUNTER", 1001, "MYLIB"
  if ($status < 0)
    message "Error generating sequence number."
    rollback "$UUU"
    done
  else
    SEQNO = 1001
    commit "$UUU"
  endif
else
  SEQNO = $result
  commit "$UUU"
endif

Blog: Frank Doodeman
Frank is a Software Architect at Uniface.
In case you’ve missed the summer’s exciting news from Uniface headquarters, Uniface 10.3 has now arrived. I’ve already been working with this new version for a while, initially using a couple of pre-releases, but then for the past few weeks the live release. This experience has convinced me that Uniface 10, and version 10.3 in particular, is the version the Uniface community has been waiting for. I’m writing this blog post to explain why, and especially to share my experiences with the new IDE.
Background: Uniface 10
Uniface 10 was designed and built based on the wishes of the Uniface developer community. Hundreds of questions and requests from Uniface developers all over the world were taken into account during this extensive design exercise. The result is a complete overhaul of the Uniface development environment. Uniface 10 has a whole new look and feel, comparable with any modern IDE. Although it’s still recognizably Uniface, developers may need a little time to get used to the new version, but in my experience that will be time well spent. There’s no way I’m going back to Uniface 9!
A major difference from earlier versions is that Uniface 10 is a non-modal development environment, which means you can work with as many Uniface objects as you like in parallel. Being able to switch between components with just one click makes development easier and more efficient. This by itself is a great reason to start using Uniface 10.
Highlights of Uniface 10
Here are some of the enhancements that you’ll notice immediately when you start using Uniface 10 for the first time:
- The IDE’s performance has significantly improved, making the non-modal concept a pleasure to work with.
- The graphical form painter functionality is drastically improved – a strong argument for client/server developers to switch to Uniface 10.
- Debugging is faster: every error and warning message in the compiler output contains a hyperlink to the relevant line of code.
- There’s a completely updated and stable meta-dictionary so developers can safely port their existing custom-written utilities to Uniface 10. The additional menu items in previous Uniface versions can be used to launch these utilities.
- Uniface 10 now also has user-defined menus and user-defined worksheets. My experience shows these are very powerful. Yes, you might need to modify your tools, but again it’s worthwhile.
- The new Transport Layer Security (TLS) network connector makes the network connection between client and server secure – vital for business-critical applications.
I’ll discuss many of these enhancements in more detail in future posts. As well as all these major improvements, Uniface 10 brings some smaller “nice to haves”. For example, I’m pleased to have the option to set the title bar text of the IDE application window.
Migrating to Uniface 10
The migration process from Uniface 9.7 to Uniface 10 has been run by the Uniface team over and over again. Many huge Uniface 9 customer applications have been migrated successfully to Uniface 10. So for those currently on Uniface 9.6 or 9.7, migration is likely to be a smooth process. If, on the other hand, you are currently considering migrating to Uniface 9.7.05, my advice would be to move directly to Uniface 10 instead: besides the advantages described above, it means one migration rather than two and ensures long-term support. (This is also Uniface’s advice.)
Conclusion: based on my experience, I believe Uniface 10.3 is the version to go for.
Blog: Peter Lammersma
Peter Lammersma is an entrepreneur and IT and business consultant. Peter works extensively with Uniface 10. As a long-serving member of the Uniface community, he’s kindly agreed to give his independent perspective in this blog series.
Over the years many Uniface developers have created tools on top of the Uniface Repository.
One tool that has been built by many is one that looks for "dirty" objects: objects that were modified after they were last compiled successfully.
In Uniface 9 such a tool would have been based on comparing the fields UTIMESTAMP (modification date of the source) and UCOMPSTAMP (compilation date of the source) of various Uniface Repository tables.
In Uniface 10 this has changed, mainly to align the repository with the export format that has been optimized for version management:
- The modification date of a development object source is only found in the main object. And yes, it is also updated when you change a sub-object. So if you change a modeled field, the UTIMESTAMP of the entity is updated.
- The compilation date of a development object is no longer an attribute of the source code. It does not have much value to know when the source was last compiled, if you can't be sure that the compiled object was the result of that compilation. Someone may have copied a different compiled object to the output folder. The only real compilation date is that of the compiled object (file on disk or in a UAR).
Uniface 10.3 is the first release of Uniface 10 that is shipped with meta definitions: the new DICT model is published. So now you can re-engineer the tools that you made for Uniface 9.
In order to make it easier to (re)write a tool, the $ude("exist") function has been enhanced to return the attributes of the compiled object (file on disk or in a UAR) such as the modification date.
Compiling objects because their source code has changed
It is not just components that require compilation. There are 14 types of development object that require compilation and generate a file in your resources. I have attached a sample tool that checks whether these objects are "dirty" and therefore require compilation.
The tool gathers the source modification date from main development objects, and the compilation date of the compiled objects. In some cases, one main development object (such as a message library) results in many compiled objects (messages).
The tool uses $ude("exist") to check the compilation timestamp of the compiled object and $ude("compile") to compile it.
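The essence of that check can be sketched in a few lines of JavaScript. The timestamp format and the function name here are illustrative; in the real tool the two dates come from the repository and from $ude("exist") respectively.

```javascript
// An object is "dirty" when its source was modified after the compiled
// object was produced, or when no compiled object exists at all.
// Timestamps are ISO-8601 strings for illustration.
function isDirty(sourceModified, compiledModified) {
  if (compiledModified == null) return true; // never compiled
  return new Date(sourceModified) > new Date(compiledModified);
}
```

Everything the tool does beyond this comparison is gathering the two dates for each of the 14 compilable object types and, where needed, mapping one main object to its many compiled outputs.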
The attached export file contains a full project export, so when you open project WIZ_COMPILE, you will see the whole thing.
You can download the export here: [download id="7581"] And here is a file with some test data for each object type: [download id="7585"]
You will need the Uniface 10.3 DICT model to compile the tool. The new DICT model for Uniface 10.3 is delivered in umeta.xml in the uniface\misc folder of your development installation.
PLEASE NOTE: The sample tool does NOT take into account that a component may require compilation because a modeled entity or an IncludeScript has changed. See below.
Compiling components because a modeled entity has changed
Please note that the attached sample does NOT check if a component requires compilation because a modeled entity has changed. If you had this check in your Uniface 9 tooling, you also need to implement it in your new tooling.
A Uniface 9 based example for this issue can be found here: http://theunifaceuniverse.blogspot.nl/2011/04/change-entity-compile-forms.html You would need to simplify this for Uniface 10 because the modification date is only on the main development object.
Compiling objects because an IncludeScript has changed
Please note that the attached sample does NOT check if a development object requires compilation because an IncludeScript has changed.
To implement this is quite tricky, as you would have to find the #INCLUDES within the IncludeScript code, and handle the fact that they can be nested. To correctly parse all of that might not be much faster than just compiling everything...
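To illustrate the bookkeeping involved, here is a small JavaScript sketch. The data shapes and names are assumptions for illustration; a real implementation would still have to extract the #INCLUDE references from the actual IncludeScript code.

```javascript
// Collect every IncludeScript a source depends on, following nested
// #INCLUDE references. 'includes' maps a name to the names it
// references directly; the result is the full transitive set.
function transitiveIncludes(name, includes, seen = new Set()) {
  for (const inc of includes.get(name) || []) {
    if (!seen.has(inc)) {
      seen.add(inc);
      transitiveIncludes(inc, includes, seen); // recurse into nested includes
    }
  }
  return seen;
}

// A source then needs recompiling when any of its transitive includes
// changed after the source was last compiled.
function dirtyViaIncludes(name, includes, changedSince) {
  return [...transitiveIncludes(name, includes)].some(changedSince);
}
```

Note that the 'seen' set also guards against circular #INCLUDE chains, one more reason the naive "just compile everything" approach can end up being the pragmatic choice.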
Focus on the right stuff
For a long time, ‘mobile first’ was our paradigm as software developers. Every new application should not only take the mobile user into account but treat mobile as the primary platform. Nowadays, Artificial Intelligence (AI) is also on the agenda. But what does it mean for developers, and what happened to mobile?
AI first, mobile second?
Mobile is not forgotten. The ‘mobile first’ paradigm was necessary to make the step from desktop to mobile and adopt the features of mobile devices. Now that mobile devices are put first, it’s common to design and build from a mobile perspective. Mobile is the new normal, just like desktop was in the 90s and web (still on desktop, by the way) in the 2000s. Artificial intelligence (AI), in contrast to mobile, is not new. Twenty years ago we could only dream about the mobile revolution; what we find normal these days most visionaries didn’t predict a decade ago. But AI is something from the mid 50s, and from far before that. It’s been in our minds for generations: machines behaving like humans.
Artificial General intelligence (AGI)
There are roughly two categories of AI: applied and general. The latter, Artificial General Intelligence, is used for general-purpose systems that more or less behave like the human brain. These systems can even be better (whatever that might mean) than Human General Intelligence. This is what we think about when we talk about AI. But it’s elusive. Have you seen the movie Her, about a man falling in love with his operating system? This is what we think AI is or should be: thinking like a human, but without the disadvantages of the human brain (the need for sleep, for instance). But it is also the image that scares us most, isn’t it? Computers and robots taking over and making us humans superfluous. Sometimes it/IT looks like magic. Do you remember the magician David Copperfield making the Statue of Liberty disappear? We all knew it was an illusion (although we didn’t know how he did it). In the 18th century an automated chess player was invented: a machine that could play chess. As it turns out, there was a chess player hidden inside the machine. In the 90s neural networks were very popular: computers programmed to behave and learn like the human brain. Finally we had real AI! Very promising, but about a decade later we learned about big data. Why predict the future if you can calculate it?! Google could predict the flu based on search queries. That’s what’s going on these days. And that’s what makes IT an interesting playground for Uniface.
Artificial Applied Intelligence (AAI)
AAI covers systems that replicate specific human behavior or intelligence. It varies from old-fashioned fuzzy logic (like the controller of your central heating system and the PLCs that control the traffic lights in your city) to Machine Learning (which you wish controlled the traffic lights). It all might look like a kind of intelligence, but most of the time it’s something the developer more or less created. The reaction triggered by a certain action, however, depends on previous results: “Last time you, the user, were satisfied when I did this after you did that, so I am going to do exactly the same.” There is nothing magical about that. That’s combining data, computing power and a bit of common sense. Here’s an example where I wish the developer had used AAI. On my phone I have a public transport app. When I type the name of the street I want to travel to, it shows me all the cities with that street. Of course I can start typing the name of the city, but I want the system to know where I want to go. Since I always use the app within my own city, I expect it to learn how I use it and show the street in my own city at the top of the list.
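The learning the transport app lacks can be as simple as counting. A tiny sketch (all names and data are illustrative): rank the suggestions by how often the user picked them before.

```javascript
// Rank candidate cities by how often they appear in the user's
// selection history. Array.prototype.sort is stable, so cities the
// user never picked keep their original relative order at the end.
function rankByHistory(candidates, history) {
  const counts = new Map();
  for (const pick of history) {
    counts.set(pick, (counts.get(pick) || 0) + 1);
  }
  return [...candidates].sort(
    (a, b) => (counts.get(b) || 0) - (counts.get(a) || 0)
  );
}
```

No machine learning is needed for this; a frequency count already makes the app feel like it "knows" where you want to go.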
Big data and sensors
Tesla knows how to use AAI. Their autopilot mostly depends on AAI. Every time something unexpected happens, the car communicates this to a centralized system. The system learns by comparing the specific situation, the performed actions and the results of those actions. In fact, it’s relatively easy: the only thing the system has to do is decide whether the current course is safe, constantly monitoring all real-time input. On the internet you can find videos of Teslas predicting a collision and taking proper action to prevent it. The autopilot stopped the car, as it should. From a human perspective that’s not a big deal; it is what our brain does constantly while we are awake (and even in our sleep). And that is exactly what AAI is all about: replicating a specific part of human intelligence.
Most ‘old-fashioned’ developers, and probably even the organizations they work for, still want to build software that is hardcoded from A to Z. That makes the development process manageable and testable. But that is software supporting the business processes of yesterday. Nowadays users expect software to think with them, software that supports their wishes and demands of tomorrow. Within a few years they will expect their systems to think for them! In modern software development, AI must be kept in mind. Not every situation can be programmed or tested. It is not a developer thinking about every possible situation. Software is more than a long list of ‘if then’ statements; in a way, it’s less. All it takes is a database with all possible situations and actions. Every new situation is added as soon as it occurs, updating the system on every possible occasion. The heart of the system consists of algorithms that determine which action is the best option in a given situation. This is how a chess-playing computer beats a human grandmaster without cheating like the machine mentioned above: by playing (and winning and losing) over and over again and learning from it. And this is how a robot teaches itself to walk: by walking, falling and standing up over and over again.
Another example where I want more intelligence is my calendar. When I have an appointment, I want my calendar software to tell me when I should leave to be on time, based on my current location, the means of transportation, the traffic, my behavior (I walk fast, but always leave just a bit too late), and so on. And I want the software to warn me when a new appointment endangers my schedule for that day.
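The ‘database of situations and actions’ described above can be sketched as a table of outcomes per situation/action pair: the system records every result and picks the action with the best average so far. This is a deliberately naive illustration, not a production learning algorithm, and all names are assumptions.

```javascript
// A minimal situation -> action learner: record the outcome of each
// (situation, action) pair and choose the action with the best average.
class ActionTable {
  constructor() {
    this.stats = new Map(); // "situation|action" -> { total, count }
  }

  // Update the table with the observed outcome of an action.
  record(situation, action, outcome) {
    const key = situation + "|" + action;
    const s = this.stats.get(key) || { total: 0, count: 0 };
    s.total += outcome;
    s.count += 1;
    this.stats.set(key, s);
  }

  // Pick the action with the highest average outcome seen so far;
  // actions never tried score 0, so new options still get a chance.
  bestAction(situation, actions) {
    let best = actions[0];
    let bestAvg = -Infinity;
    for (const a of actions) {
      const s = this.stats.get(situation + "|" + a);
      const avg = s ? s.total / s.count : 0;
      if (avg > bestAvg) {
        best = a;
        bestAvg = avg;
      }
    }
    return best;
  }
}
```

The chess computer and the walking robot from the text do essentially this at enormous scale: play, record the result, and prefer what worked.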
What makes a programming language suitable for AAI purposes?
- AAI is about data. Some of the data is static and stored in a database. With Uniface we can build data-intensive applications; that’s what Uniface is designed for. It’s technology independent, scalable and very stable. The Uniface programming language is optimized for reading data from and storing it into every common database system.
- AAI is about using sensors. Not all data is (relatively speaking) static; some is real-time, from sensors or user input. Progressive Web Apps built with Uniface can use every hardware feature on mobile devices. And Uniface can even be installed on devices like a Raspberry Pi and use every sensor attached to the system.
- AAI is also about user input. Uniface supports a wide range of user devices, from old-fashioned desktops to mobile apps on a smartphone.
- AAI is about computing power. Applications built in Uniface can be deployed on every mainstream OS. The code is interpreted efficiently.
- AAI is about building clever algorithms. Developers don’t have to worry about OS and database specifics, so they can focus on writing clever software. Building algorithms is something every developer loves!
That sounds ideal for Uniface. And it is! I am very curious about your first AAI applications!
For samples, tools, add-ons, blogs and more, visit openuniface.com
European Women in Technology 2017, an event with attendees from every corner of the continent, was held in November in Amsterdam, making it a great opportunity for women working at Uniface to be part of it. It was an excellent platform for the tech sector to connect, to learn about what is going on around the industry, and to be inspired by the many women achieving fulfilling and interesting careers in technology.
In this blog, we share three perspectives from those who attended. First up, Jyoti Singh, Software Developer:
The two-day conference consisted of multiple parallel sessions: inspirational keynotes, personal and career development workshops, technical classes, and networking opportunities; in short, it contained all you need to progress and flourish in the tech sector.
As a whole, the event was significant from the following perspectives:
- Inspirational and Motivational - It was an incredible experience to hear and learn from successful people in tech about championing women, the importance of female role models, accelerating your career, getting into the boardroom, and more. Some very interesting talks covered how to build confidence, use the right body language and market yourself to maximize your potential. It was also very encouraging to see how many women are leading in their careers while taking charge of their life-work balance and succeeding in the ever-changing technology world. It was a perfect prompt to reflect on your own career and where you are heading.
- Be Tech-savvy - A few sessions targeted the latest trends and emerging technologies; some are listed below:
- Big data - one of the hottest buzzwords across the industry, yet despite the hype, the challenges of distributed data storage and of how to store and process big data are not yet fully understood. The session offered some good analysis of how to approach these challenges by dispelling some myths, pointing out the pros and cons of various solutions available on the market, and giving some tips on building reliable data pipelines.
- Augmented Reality (AR) - Explaining how to build with AR using tools such as ARCore and ARKit, and exploring the potential of AR for innovation in marketing and in products.
- Build Chatbots with Amazon Lex - Amazon Lex is an AWS service for building conversational interfaces for applications using voice and text. The session explained how, with Amazon Lex, you can build sophisticated, natural-language chatbots into your applications to create new user experiences.
- Browser Peer-to-Peer Connections - How to create a serverless, real-time multiplayer game using peer-to-peer connections in the browser, making use of WebRTC and Dart.
- Transforming the World with Artificial Intelligence (AI) - the hottest topic in technology. AI is not scary; it can even be the exact opposite. The session explained how AI is already helping people to do amazing things and how it can be used in our daily lives: intelligent machines, self-driving cars, smart cameras, your own digital personal assistant, ways to discover new forms of medical treatment, and much more!
- IoT and the Cloud - How to leverage Amazon Web Services (AWS) to build a real connected product, which includes securely ingesting and sending data to the cloud and enabling device-to-device communication.
- Networking - Last but very important, it was a great opportunity to meet like-minded people and build connections.
The entire event was truly inspiring and thought-provoking, and I would like to end with my favourite quote from the sessions:
Next is Krissie Towikromo, Marketing Analyst:
In my whole career in the IT industry, this was the first time I attended the European Women in Technology conference. I went to the event with no baggage and no expectations, wondering: “why is there a need for such a big event for women in IT?” I was overwhelmed by the passionate, positive and uplifting stories from the speakers. We heard great stories from IBM, Microsoft and Adidas, among others. During the inspiring sessions, women explained the paths they followed to get where they are today. The workshops were fun, as some of the sessions were interactive and attendees could really participate in them. You could visit one of the 31 companies on the exhibition floor, there to show off their solutions and recruit talented women. The lack of women in tech still exists in 2017, and talking to all these talented women made me realize again that we are far from ‘there’ yet.
Finally, from Christy Hillebrink, Marketing Director:
In one way it’s a pity that such an event exists—highlighting the shortage of female talent in the field of IT. However, the same can be said for other industries as well. Teaching, for example, is an area where the number of women far outweighs the number of men. So while it feels strange to have a specific event on this topic, on the other hand it’s great that the lack of women in IT functions is being put on the radar and talked about. That can only lead to more awareness and action from women, men and companies alike. The focus of the event was around diversity and inclusion—and how companies that operate with these foundations can find more success than those that aren’t actively working in these areas. For me personally, there were several takeaways, thoughts and pieces of inspiration that I would like to share (in completely random order):
- School curriculum and the promotion of IT topics in education is severely limited and outdated. This hurts everyone.
- Be a “learn it all” vs a “know it all” to empower others and advance in your own career.
- IBM prediction: medical labs “on a chip” will trace disease and predict our health. Cool!
- Diversity is more than gender and race, and building teams based on which talents individuals can bring to the table is an art form.
- Tech tracks being led by women engineers (everything from AI to blockchain to machine learning and everything in between) underlined that embracing IT is an opportunity for everyone.
It was unique to attend an IT event with so many women. While there is no quick fix, or even a concrete solution, for getting more women into IT, events like this are a step in the right direction.
With the release of patch F205 for Uniface 10.2.02, the Uniface 10 compiler has changed to ensure compatibility with Uniface 9 for triggers having default behavior. This blog explains when and how Uniface handles ‘empty’ triggers and invokes default behavior. A small subset of the triggers in the Uniface model (*) falls back on default behavior if these triggers do not contain executable code. A typical example is the On Error trigger for a field or entity. If you do not define the trigger, the Uniface run time engine will still invoke default handling for error situations. If the trigger has been defined with executable code, only that code is executed, and the default behavior is suppressed. (*) see the Uniface documentation, section “Triggers with Default Behavior” for the complete list of applicable triggers.
When does a trigger *not* have executable code and revert to default behavior?
In Uniface 10, default behavior is invoked if any of the following conditions is true:
- the trigger is not declared at all
- the trigger declaration contains no executable code
- the trigger declaration only contains one or more pre-compiler directives that do not result in executable code
- the trigger is undeclared
What is the impact of the Uniface inheritance model, and how do you restore default behavior?
Behavior defined in code containers is inherited at ‘lower’ or ‘derived’ levels. Examples:
- a modeled entity subtype and its fields inherit from a supertype and its fields
- a component can inherit from a modeled component (called a component template in Uniface 9)
- an entity or a field in a component’s data structure inherits from the modeled entity or field.
Inheritance can take place over multiple levels, but that’s beyond the scope of this blog. In Uniface 10, inheritance of code in containers is module-based. Code is contained in explicitly declared triggers, entries and operations. If a trigger is declared again on the inheriting level, that definition takes precedence and replaces the definition that was inherited from the higher level. To suppress an inherited trigger in Uniface 10, use any of the options described above: declare an empty trigger, declare a trigger with comment only, or undeclare the trigger on the lower level. An ‘empty’ trigger or undeclared trigger will fall back on default behavior if that is applicable for that trigger. The following table shows some examples:
| Modeled field trigger error | Component field trigger error | Result |
| --- | --- | --- |
| not declared | not declared | Default error handling |
| `trigger error`<br>`end` | not declared | Default error handling |
| `trigger error`<br>`if ($error = 0105)`<br>`… some code`<br>`return -1`<br>`endif` | not declared | User-defined error handling |
| `trigger error`<br>`if ($error = 0105)`<br>`… some code`<br>`return -1`<br>`endif` | `trigger error`<br>`end` | Default error handling |
| `trigger error`<br>`if ($error = 0105)`<br>`… some code`<br>`return -1`<br>`endif` | `undeclare trigger error`<br>`end` | Default error handling |
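The resolution rules in the table above can be summarized as a small decision function. The sketch below is purely illustrative (Python, not ProcScript): the strings stand for the contents of a trigger’s code container, and the function names are invented.

```python
# Illustrative sketch of the trigger resolution rules described above
# (Uniface 10, patch F205 or later). A trigger definition is modeled as
# None (not declared), "" (declared but effectively empty, which still
# breaks inheritance), "undeclared", or a string of executable code.

def effective_trigger(modeled, component):
    """The component-level declaration overrides the modeled level;
    otherwise the modeled definition is inherited."""
    return component if component is not None else modeled

def behavior(modeled, component):
    """Which behavior runs, for a trigger that has default behavior."""
    definition = effective_trigger(modeled, component)
    if definition in (None, "", "undeclared"):
        # No executable code anywhere: fall back on default behavior.
        return "default"
    return "user-defined"

# The rows of the table above, in order:
assert behavior(None, None) == "default"
assert behavior("", None) == "default"
assert behavior("if ($error = 0105) ...", None) == "user-defined"
assert behavior("if ($error = 0105) ...", "") == "default"
assert behavior("if ($error = 0105) ...", "undeclared") == "default"
```

Note how, since patch F205, an effectively empty declaration and an undeclared trigger both end up in the same “default” branch; before F205 an empty declaration would suppress the default behavior as well.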
What has changed since patch F205?
With the solution for Issue # 31689 in patch F205 (Uniface 10.2.02), explicitly-declared triggers that are effectively empty now fall back on default behavior, if that is applicable for that trigger. Before this patch, an explicitly declared trigger in Uniface 10 without executable code or with comment only would not only break inheritance, but also suppress its default behavior. Prior to F205, the only way to ensure that default behavior was invoked was to not declare the trigger or to undeclare the trigger. In case of inheritance of a trigger from a higher level, the only way to restore default behavior on the lower level was to undeclare the trigger.
What has changed since patch F206?
In patch F206 the automatic migration logic in Uniface 10 was changed to benefit from the modifications in patch F205. Before patch F206, the migration would attempt to assess whether a trigger container coming from Uniface 9 with potential default behavior contained comments or entry declarations only. If so, the trigger would be commented out or undeclared. This approach was not watertight and had a few disadvantages, like adding code during the migration (‘code pollution’) and causing additional compiler warnings compared to Uniface 9. When migrating a Uniface 9.6 or 9.7 export file into Uniface 10 after installing patch F206, all triggers, including those with potential default behavior, are migrated ‘as is’. Patch F205 is compatible with code migrated into Uniface 10 using a patch prior to F206, i.e. there is no need to redo the migration. However, if you want to benefit from the changes in the migration, you should migrate after installing patch F206 or higher.
Last month, the Uniface mobile team added support for the latest mobile operating systems (OSs) available on the market. Since both Apple and Google have released new versions of their OSs, we wanted to make sure that Uniface mobile apps can be run and deployed on them. So from now on, Uniface customers can run and deploy their mobile apps on Android versions from 4.4 up to Android 8.0 Oreo, and on iOS versions from iOS 8.0.0 to iOS 11.0.0. To take advantage of the support for the new versions, all customers need to do is recompile their apps using Uniface and Buildozer. For us, adding support for the latest mobile OSs means making sure that a Uniface mobile app successfully runs on all the above-mentioned OSs and can be deployed on them. If a mobile app is not compatible with a mobile device’s OS, then Google Play or the App Store would filter the app from the ‘app to install’ list, making it difficult to deploy on that device. To verify these scenarios, we developed our own mobile app and tested it on all the platforms in the product availability matrix. The latest release of the Uniface Previewer app is an example of a Uniface mobile app which can be run and deployed on iOS 11 and Android 8.0. (Image: the Uniface Previewer app.) I would also like to mention that we have now made our apps forward compatible for both Android and iOS. To illustrate, here is an example:
- a Uniface app is built and deployed on OS version ‘x’
- a user device goes for a system update
- this results in upgrading the OS to ‘x+1’
- the Uniface app would continue to work on the later OS.
Personally, I really enjoyed working on this project. It gave us better insight into the mobile app development and deployment cycle. We hope our customers find this as useful as we do.
Good luck to Max Verstappen (Twitter: @Max33Verstappen) in getting podium places in the USA and Mexico, after the great achievements in Malaysia and Japan! As said before, the pit stops improved. By incorporating all developments in technology, as well as fine-tuning the roles within the team, pit stops were made as efficient as possible (“difficult to beat 1.9 seconds”). All in all, pit stops contributed the most when used strategically to win races: based on the efficiency attained through a perfectly synchronized process with flawless collaboration, the squads gained an advantage of 26 seconds over their opponents. Without going into the overall strategy, rest assured that making the team work efficiently is not a mere milestone; it takes constant practice and sharp focus from all involved, including the crucial factor of trust. As Michael Schumacher said: “When you start out in a team, you have to get the teamwork going and then you get something back.” In that sense, at Uniface our teams have reached a level of efficiency which allows us to release our software on many platforms, and for two different versions of the product, every scrum sprint (every 2 weeks). We can still improve, and we keep on doing that, as kaizen is part of our DNA nowadays. And the biggest achievement, as in F1, is being able to apply that predictable process to the overall strategy. Let me go back to what happened in F1 with the pit stop: once the team had mastered the level of efficiency, the squad decided to think outside the box and concentrate not only on the pit stop but on the overall performance of the race. At Uniface, we are aligning the business with IT to look at the overall strategy, although we still improve our scrum ceremonies. We think the areas where we will gain the most are vision, strategy, roadmap, backlog management, and overall open two-way communication. I’ll keep you updated on the progress of this fascinating project.
Remember, undercutting is the art of knowing when a competitor will stop or come back onto the track, so that you can intentionally beat him or her by planning your own pit stop accordingly. In my opinion, we need to make the most of a well-performing software delivery process to handle ever-changing priorities and hit the market on time with the software our customers and prospects need. We have come from an 18-month release cycle (~78 weeks) to a 2-week cycle; now we need to use that to strategically deliver what helps our customers the most… in a changing world. Food for thought: the following table is an attempt to compare pit stops and scrum sprints. I know it is not perfect, but its intention is to spark thoughts. Let me know what you think about it. Enjoy!
| Pit stops | Symmetry | Scrum sprints |
| --- | --- | --- |
| Used in race strategy | Goal is to win | Used in delivery strategy |
| Execution of the pit stop | Synchronized perfection | Sprint work |
| Pit crew | Highly trained technical members | Development team |
| Team / squad | Harmonious collaboration | Scrum team |
| Preparation, changing tires, refueling, adjusting the car | Tasks mastered by the team | Architecture, coding, testing, delivering stories |
| Changing rules | Adapt / fast response | Changing requirements |
On October 19th I will be presenting at QUBE’s inspiration session. I would like to invite you to join the virtual event. For more information and to register, visit: http://qube.cc/inspiration/ I would expect that any business could innovate incrementally in the way I’ve just been describing in this blog series, and many would find it vital to do so. Yet organizations can easily find themselves stuck when it comes to innovation. They don’t always realize how much they can gain right now from moving forward, or how much they have to lose should others overtake them. For many businesses, when it comes to IT, the type of innovation to focus on could be improving the user experience and making users more efficient by creatively using and connecting what is already there. This in turn can contribute to a virtuous circle of growing business agility and innovation. By becoming more agile about the way they innovate with technology, companies can become more responsive, freeing themselves up for business innovation. Mobile is one of the most important ways to unlock innovation. The first step of moving existing capabilities to mobile isn’t necessarily very innovative; however, it can lead to many innovative possibilities. Could moving some functionality to mobile unlock innovation, and hence agility, for your business? Is there some other evolutionary step you could take that would do the same? This series is based on the paper: Agility and Innovation in Application and Mobile Development. You can download the paper here.
Congratulations to Max Verstappen on winning the Malaysian Grand Prix last weekend. You see, strategy pays out when everything falls into place. So, my drive 😉 is to apply scrum in your business strategy to win the race too. So in F1 the pit stop, besides being a masterly synchronized ballet of disciplined execution and expertise, the pit stop is used strategically by the team to win the race. How? The amount of pit stops depends on the desired lap time while gauging fuel consumption, tire wearing out, undercutting (taking over a car while making the pit stop or leaving one). With the above in mind the team determines to use certain amount of pit stops, or to add one more in order to win. In SCRUM terms, the sprints are the perfectly synchronized production of software which can be strategically used to deliver value to our customers. Whether we deliver features gradually or change the order of delivery as to meet business value. Here at Uniface, we are busy trying to get SCRUM to the next level where alignment between business and IT are essential to make a difference. We must be aligned to adapt to change and therefore better serve our customers. In that context, we already have a track record as we have been using SCRUM for more than 9 years and have done the necessary improvements to the processes ourselves. As an example, we have even invented our own ceremony to facilitate the alignment among teams called a Sprint Pitch (an already 3-year-old ceremony for us). To stress why aligning the business with IT is important, I want to emphasize the analogy from the F1 championships; I was inspired to use it when watching a Red Bull documentary about “<a href="https://www.redbull.tv/video/AP-1P5DU67BD1W11/the-history-of-the-pit-stop" target="_blank" rel="noopener">The history of the pit stop</a>” during my last flight. You know the thrill of changing tires and refueling the car in the shortest amount of time possible? 
In the early days, the pit stop was just a pause that took up to a minute; there was no changing of tires. That came in the 1970s, when an unplanned pit stop to change tires would take 3 to 5 minutes. In the early 1980s, <a href="https://en.wikipedia.org/wiki/Gordon_Murray" target="_blank" rel="noopener">Gordon Murray</a> turned them into strategic pit stops, considering the car’s weight and tire degradation, and saw how all that influenced lap times. At that moment another race began: the one to bring the pit stop’s time down to the minimum, in order to use the pit stop more strategically and make the time necessary for <a href="https://www.youtube.com/watch?v=NaWt5zRwOWY" target="_blank" rel="noopener">a pit stop</a> negligible. Well, it is no surprise that reaching the shortest time took analysis, collaboration and improvement: changing the tires (or better, the entire wheel set), refueling the car and cooling the car’s engine in just under 2 seconds. Bear in mind that <a href="https://www.youtube.com/watch?v=o8KTXtvqbtg" target="_blank" rel="noopener">it actually takes a crew of 18 to 20 highly skilled individuals to handle a pit stop</a>. <a href="https://unifaceinfo.com/scrum-pit-stops/jorge2-2/" rel="attachment wp-att-7393"><img class="alignnone wp-image-7393" src="https://unifaceinfo.com/wp-content/uploads/2017/10/Jorge2.png" alt="" width="404" height="243" /></a> You may wonder how we do that in scrum at Uniface, but first, time for a pit stop… (to be continued)!
In this blog series I’ve covered how innovation can be evolutionary, but what does this look like in the real world? Mobile technology is a great example of the power of evolutionary innovation, and is proving to be a major way of doing things better. Although mobile apps may have been just a fun distraction until relatively recently, companies increasingly see them as a way of unlocking their enterprise. In some environments, such as academia, users have already come to expect the applications they use to be accessible via mobile devices – and consumerization means this is increasingly the case across the board. What’s more, provision of mobile support often needs to happen fast.
Mobile lends itself to evolutionary approaches
You can add a lot of value by simply delivering existing applications’ business functionality via a mobile device, especially given they are typically always on and at hand 24x7. Porting key business tasks to mobile is a prime example of evolutionary innovation, especially as putting functionality on a mobile device can unlock many more innovative ideas. These could be as simple as capturing expenses on the fly, putting an end to lost receipts and time-consuming monthly admin. Ideas could also be more ambitious. For example, a retailer could use location data to avoid missed deliveries and present alternative drop-off locations should the customer be away from home. Today, we already see notifications being heavily used to maximize the efficiency of deliveries. With the right development platform, you can do this without much additional overhead. The team can focus on building good, responsive applications that can be deployed across platforms, whether desktop, web or mobile. It is worth noting that for mobile apps, you don’t have to deliver the whole enterprise solution, just key processes that are relevant to the mobile platform. For example, in an HR application you can save a lot of time and money by putting holiday or expenses approvals on a manager’s phone. This is far more efficient than checking emails and possibly forgetting to act, as they can do the job with one or two touches in response to a notification.
Mobile promotes innovation
What we’ve just described can be truly innovative and evolutionary because you go back to basics. You start by thinking about what’s going on in a manager’s day and what they need to work smarter. You then enable that vision one bite at a time, reusing functionality you already have. As well as reusing your existing solutions on mobile, you can also innovate by combining them with other technology. For example, a mobile device can continually gather information about location and other aspects of the user’s situation, presenting the right options when most appropriate. You can take advantage of all this real-time information to make your applications better. For example:
- A salesperson could be alerted when they are in the neighborhood of a new lead.
- While a shop assistant is talking to a customer, the assistant’s augmented reality glasses could feed them live facts about the products they’re looking at.
- A building automation app could use geolocation information to manage lighting, heating and security as a person navigates the location.
A choice of approaches
Mobile innovation can pay big dividends. How evolutionary it is depends partly on the approach you adopt. Possible approaches range from native device development through to mobile web sites. Each has its pros and cons. By taking a pragmatic view, it is possible to combine the best aspects of different approaches. A hybrid approach combines native and mobile web development, arguably giving you the best of both worlds. It yields opportunities to reuse much of your current functionality and team skills, while also taking advantage of device features. This opens up many innovative ways to improve user experience and efficiency when using the business application. This series is based on the paper: Agility and Innovation in Application and Mobile Development. You can download the paper here.
In this blog post I discuss how Uniface uses Authenticode to sign Uniface executables on the Windows platform. First, a word on the merits of signing your executables. Code signing is nothing more than calculating a checksum over an executable and attaching that checksum to the executable in a cryptographically secure way. This way, any customer can be assured of the integrity of the code they download from Uniface’s download server: it has not been tampered with, nor was it altered while in transit. The use of a public-private key certificate from a reputable vendor adds the advantage that you can rest assured the code you downloaded really originated from Uniface. Designing code signing requires you to take a step back and revisit your existing processes to identify potential issues. Any statement on the integrity of the code can only be satisfied if you manage your code signing keys in a defined way. We have a defined process governing where keys reside, who has access to them, and what the people with access can do. Keys reside in a physically secured location, with access controlled and limited to a small group of people in the company. Only these people can get their hands on the Uniface code signing keys, and only for a limited set of defined purposes. Strict logging is in place so that key usage can be reviewed from audit logs. The Uniface build factory consists of machines that take source code from a version control system and run the build scripts to produce signed executables. The code signing is run directly from our makefiles. We use a set of ‘internal’ certificates when building Uniface. Machines that are part of the Uniface build factory have access to the ‘internal’ certificate, and physical access to the build machines is limited. Only Windows executables that were produced in an official build can thus be signed using this ‘internal’ certificate. The certificate is only valid within the Uniface development lab.
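The integrity half of this story, the checksum, can be illustrated in a few lines of Python. This is only a sketch of the idea: real Authenticode signing additionally signs the hash with the vendor’s private key and embeds the signature in the executable, which the sketch does not attempt.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Calculate a checksum over the executable's bytes. Authenticode
    uses a hash like this as the value that is then signed with the
    vendor's private key."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, published_digest: str) -> bool:
    """A download passes the integrity check only if its checksum
    still matches the one calculated at signing time."""
    return fingerprint(data) == published_digest

exe = b"original executable bytes"
digest = fingerprint(exe)                      # calculated at build time
assert verify(exe, digest)                     # untouched download: OK
assert not verify(exe + b"tampered", digest)   # modified in transit: rejected
```

Any change to the bytes, however small, produces a different digest, which is what lets a customer detect tampering or corruption in transit.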
Outside the Uniface development lab, a machine with Windows installed would lack the Uniface development lab ‘root’ certificate, which is needed to build the trust chain required to validate executables signed with the ‘internal’ certificate. Once we package a Uniface q-patch, patch, service pack or distribution, we also sign these deliverables. This effectively seals the package and protects its contents from unauthorized modification. We also timestamp all files, which means all signed files carry an official counter-signature with a timestamp. Should there be an irregularity, forcing us to withdraw our software, we can do this by revoking our certificate. This comes in two flavours: either we fully revoke the certificate, or we revoke it from a certain cut-off timestamp. When the certificate is fully revoked, all files signed with it become invalid and hence can no longer be trusted. If the exact moment in time when the irregularity occurred is known, we can revoke the certificate from that moment; all files signed after it become invalid, while files signed before it remain valid. When we decide that a package is ready for shipping to our customers, we go through a process of re-signing that package with our ‘external’ certificate. This is done as part of the publication process. We check every file in the package to see if it was signed using the Uniface ‘internal’ certificate; if so, it is re-signed using our ‘external’ certificate. This ‘external’ certificate was obtained from a reputable vendor, and the public key of that vendor’s root certificate is present in every Windows distribution. Hence, using public-private key encryption, your Windows installation can check that the files we signed in our Uniface distribution have not been modified since we signed them, and that the software actually comes from us.
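The two revocation flavours reduce to a simple rule over the counter-signature timestamps. The sketch below is illustrative only, with invented function and parameter names; it is not how Windows implements revocation checking.

```python
from datetime import datetime
from typing import Optional

def is_signature_valid(signed_at: datetime, revoked: bool,
                       revoked_from: Optional[datetime]) -> bool:
    """Full revocation invalidates every file signed with the
    certificate; revocation from a cut-off timestamp only invalidates
    files whose counter-signature is at or after that moment."""
    if not revoked:
        return True
    if revoked_from is None:
        return False                 # fully revoked: nothing is trusted
    return signed_at < revoked_from  # signed before the cut-off stays valid

cutoff = datetime(2017, 6, 1)
# Certificate not revoked: everything is valid.
assert is_signature_valid(datetime(2017, 1, 1), revoked=False, revoked_from=None)
# Fully revoked: even old signatures become invalid.
assert not is_signature_valid(datetime(2017, 1, 1), revoked=True, revoked_from=None)
# Revoked from a cut-off: older signatures stay valid, newer ones do not.
assert is_signature_valid(datetime(2017, 1, 1), revoked=True, revoked_from=cutoff)
assert not is_signature_valid(datetime(2017, 7, 1), revoked=True, revoked_from=cutoff)
```

This is why the timestamped counter-signature matters: without a trustworthy signing time, revoking from a cut-off moment would not be possible.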
So the next time you install Uniface, you can be sure the software is fine.