Software Architecture: Where is the Beef?

In the past 30 years, the hardware infrastructure that supports software applications has changed many times. In the beginning were the mainframes and minicomputers, highly proprietary hardware with dumb terminals in front of the users. With the arrival of personal computers, developers started using the power of the user's machine, and organizations switched to a client-server model for large applications.

The user interface and part of the logic were located on the PC. It was a time when compiled languages were king and IDEs promoted visual development. With the web, the processing switched back to servers, especially at the beginning, before JavaScript and the AJAX model pushed some logic back towards the browser. Problems arose because customers used multiple browsers with different behaviors. This period also saw the return of interpreted languages like PHP or Ruby on the server side.

The current trend is toward mobile applications and a situation where connectivity between the devices and external servers is not always guaranteed. Software developers are again trying to use some of the processing power of the device for their application, taking code and data out of the servers to locate them on the mobile phone. The debate between interpreted and compiled languages is open again, this time to achieve better performance on limited device resources. The situation is, however, a little more difficult than in the client-server days: you have to deal with the multiplicity and rapid evolution of devices and operating systems, and with the fact that networks are open. The Cloud makes the architecture even more complex, as an application could run partly on the customer's side and partly on the company's servers while asking for data or other software services provided by an external supplier. This supplier could in turn have its services hosted on another third-party Cloud infrastructure. This is why, when the Amazon Cloud sneezes, some mobile customers can no longer use their favorite apps.

What is the impact of such changes for software developers? Modularity is an important concept in software design and architecture: it allows changing the software technology or moving parts of an application between different infrastructures. This could be one of the reasons behind the success of a technology like node.js, which allows running the same JavaScript code on both the server and the browser side. Modularity also has its drawbacks: calls between modules carry a performance cost, and possibly network latency when the modules are distributed. In an open and connected architecture, developers must have contingency plans for when connectivity is not available. Mobile apps should thus be able to work even with limited connectivity; such apps will, for instance, scan the network and update themselves and their data when a connection is available. As the architecture spreads across multiple locations, the need for security also increases, as the application could be attacked from many entry points or during the transmission of data between separated software layers. You have to decide how sensitive information may be distributed or transmitted on third-party platforms (cloud, mobile networks).

What will be the next infrastructure for software? I don't know, but I am sure that software developers will have to think deeper to achieve the sometimes contradictory objectives of software architecture: evolvability and application performance. Even in an Agile context, where people sometimes restrict their horizon to the end of the next sprint, software architecture is an important activity for the long-term viability of your software applications.
