
Introduction to Client Server Architecture
Published September 29, 2008 | By Andy Kramek

I am often surprised to find that many developers today still do not really understand what is meant by a "client server" architecture, or what the difference between "tiers" and "layers" is. So I thought I would post the following explanation which, if not universally accepted, has served me well over the past 10 years.

Evolution of Client/Server Systems

Computer system architecture has evolved along with the capabilities of the hardware used to run applications. The simplest (and earliest) of all was the "mainframe architecture", in which all operations and functionality are contained within the central (or "host") computer. Users interacted with the host through 'dumb' terminals which transmitted instructions, by capturing keystrokes, to the host and displayed the results of those instructions for the user. Such applications were typically character based and, despite the relatively large computing power of the mainframe hosts, were often relatively slow and cumbersome to use because of the need to transmit every keystroke back to the host.

The introduction and widespread acceptance of the PC, with its own native computing power and graphical user interface, made it possible for applications to become more sophisticated, and the expansion of networked systems led to the second major type of system architecture, "file sharing". In this architecture the PC (or "workstation") downloads files from a dedicated "file server" and then runs the application (including data) locally. This works well when the shared usage is low, update contention is low, and the volume of data to be transferred is low. However, it rapidly became clear that file sharing choked as networks grew larger and the applications running on them grew more complex, requiring ever larger amounts of data to be transmitted back and forth.

The problems associated with handling large, data-centric applications over file-sharing networks led directly to the development of the client/server architecture in the early 1980s. In this approach the file server is replaced by a database server (the "server") which, instead of merely transmitting and saving files to its connected workstations (the "clients"), receives and actually executes requests for data, returning only the result sets to the client. By providing a query response rather than a total file transfer, this architecture significantly decreases network traffic. This allowed for the development of applications in which multiple users could update data through GUI front ends connected to a single shared database.

Typically either Structured Query Language (SQL) or Remote Procedure Calls (RPCs) are used to communicate between the client and server. There are several variants of the basic client/server architecture, as described below.
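To make the query-response idea concrete, the following is a minimal sketch in Python (my own illustration, not from the original article). A stand-in "database server" executes SQL on behalf of a client and returns only the matching rows, never the underlying data file; the table and column names are assumptions made up for the example.

```python
import sqlite3

class DatabaseServer:
    """Stand-in for the server: owns the data, executes SQL for clients."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, city TEXT)")
        self.conn.executemany(
            "INSERT INTO customers VALUES (?, ?, ?)",
            [(1, "Alice", "London"), (2, "Bob", "Paris"), (3, "Carol", "London")],
        )

    def execute(self, sql, params=()):
        # Only the result set travels back to the client. In a real
        # deployment this call would cross the network; it is a local
        # call here purely to keep the sketch short.
        return self.conn.execute(sql, params).fetchall()

# The client sends a query and receives just the rows it asked for,
# not the whole table or file.
server = DatabaseServer()
rows = server.execute("SELECT name FROM customers WHERE city = ?", ("London",))
print(rows)  # [('Alice',), ('Carol',)]
```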

The Two-Tier Architecture

In a two-tier architecture the workload is divided between the server (which hosts the database) and the client (which hosts the user interface). In reality these are normally located on separate physical machines, but there is no absolute requirement for this to be the case. Provided that the tiers are logically separated, they can be hosted (e.g. for development and testing) on the same computer (Figure 1).



Figure 1: Basic two-tier architecture

The distribution of application logic and processing in this model was, and is, problematic. If the client is 'smart' and hosts the main application processing, then there are issues associated with distributing, installing and maintaining the application, because each client needs its own local copy of the software. If the client is 'dumb', the application logic and processing must be implemented in the database, and the application then becomes totally dependent on the specific DBMS being used. In either scenario, each client must also have a log-in to the database and the necessary rights to carry out whatever functions are required by the application. The two-tier client/server architecture proved to be a good solution when the user population is relatively small (up to about 100 concurrent users), but it rapidly proved to have a number of limitations:

· Performance: As the user population grows, performance begins to deteriorate. This is the direct result of each user having their own connection to the server, which means that the server has to keep all these connections live (using "keep-alive" messages) even when no work is being done (the sketch after this list illustrates the point).

· Security: Each user must have their own individual access to the database, and be granted whatever rights may be required in order to run the application. Apart from the security issues that this raises, maintaining users rapidly becomes a major task in its own right. This is especially problematic when new features/functionality have to be added to the application and users' rights need to be updated.

· Capability: No matter what type of client is used, much of the data processing has to be located in the database, which means that it is totally dependent upon the capabilities, and implementation, provided by the database manufacturer. This can seriously limit application functionality, because different databases support different functionality, use different programming languages, and even implement such basic tools as triggers differently.

· Portability: Since the two-tier architecture is so dependent upon the specific database implementation, porting an existing application to a different DBMS becomes a major issue. This is especially apparent in the case of vertical market applications where the choice of DBMS is not determined by the vendor.
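The first two limitations can be illustrated with a small sketch (my own, with made-up table names and credentials): every client program opens its own connection, presents its own log-in, and carries a private copy of the business rules.

```python
import sqlite3

def run_client(user, password):
    # Each user connects directly to the shared database with their own
    # log-in. (sqlite3 has no real user accounts; a client/server DBMS
    # would authenticate the user/password pair at this point.)
    conn = sqlite3.connect("shared.db")
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, total REAL)")

    # The business rule lives in the client, so every workstation needs
    # this code, and a redeployment whenever the rule changes.
    def discounted(total):
        return total * 0.9 if total > 100 else total

    conn.execute("INSERT INTO orders VALUES (?, ?)", (1, discounted(150.0)))
    conn.commit()
    # In a real two-tier system this connection would stay open for the
    # whole session, consuming server resources even while idle.
    conn.close()

# Every additional user means another live connection for the server to
# maintain and another set of rights for the DBA to administer.
for n in range(3):
    run_client(f"user{n}", "secret")
```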

Having said that, this architecture found a new lease of life in the internet age. It can work well in a disconnected environment where the UI is essentially dumb (i.e. a browser). However, in many ways this implementation harks back to the original mainframe architecture and indeed a browser-based, two-tier application can (and usually does) suffer from many of the same issues.

The Three-Tier Architecture

In an effort to overcome the limitations of the two-tier architecture outlined above, an additional tier was introduced, creating what is now the standard three-tier client/server model. The purpose of the additional tier (usually referred to as the "middle" or "rules" tier) is to handle application execution and database management. As with the two-tier model, the tiers can either be implemented on different physical machines (Figure 2), or multiple tiers may be co-hosted on a single machine.



Figure 2: Basic three-tier architecture

By introducing the middle tier, the limitations of the two-tier architecture are largely removed and the result is a much more flexible, and scalable, system. Since clients now connect only to the application server, not directly to the data server, the load of maintaining connections is removed, as is the requirement to implement application logic within the database. The database can now be relegated to its proper role of managing the storage and retrieval of data, while application logic and processing can be handled in whatever application is most appropriate for the task. The development of operating systems to include such features as connection pooling, queuing and distributed transaction processing has enhanced (and simplified) the development of the middle tier.

Notice that, in this model, the application server does not drive the user interface, nor does it actually handle data requests directly. Instead it allows multiple clients to share business logic, computations, and access to the data retrieval engine that it exposes. This has the major advantage that the client needs less software and no longer needs a direct connection to the database, so there is less security to worry about. Consequently applications are more scalable, and support and installation costs are significantly lower for a single server than for maintaining applications directly on a desktop client, or even a two-tier design.
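A minimal sketch of the middle tier (again my own illustration with assumed names): the application server holds the only database connection and exposes business operations, so clients never need database credentials or connections of their own.

```python
import sqlite3

class ApplicationServer:
    """Middle tier: owns the database connection and enforces the rules."""

    def __init__(self):
        # One shared connection (or pool) instead of one per end user.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
        self.next_id = 1

    def place_order(self, total):
        # Business logic lives here, not in the client and not in the
        # database. (The 10% discount rule is an assumption for the demo.)
        if total <= 0:
            raise ValueError("order total must be positive")
        if total > 100:
            total *= 0.9
        self.db.execute("INSERT INTO orders VALUES (?, ?)", (self.next_id, total))
        self.db.commit()
        self.next_id += 1
        return self.next_id - 1

# Clients call the application server's operations; in a real deployment
# this would be a network call (HTTP, RPC, etc.), never a direct database
# connection from the client machine.
server = ApplicationServer()
print("created order", server.place_order(150.0))
```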

There are many variants of the basic three-tier model, designed to handle different application requirements. These include distributed transaction processing (where multiple DBMSs are updated in a single transaction), message-based applications (where applications do not communicate in real time) and cross-platform interoperability (Object Request Broker, or "ORB", applications).
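As one illustration of the first variant, distributed transaction processing is typically built on a two-phase commit: a coordinator asks every participating database to prepare, and commits only if all of them vote yes. The toy sketch below (my own; the participants are plain objects, not real DBMSs) shows just that control flow.

```python
class Participant:
    """Stand-in for one DBMS taking part in a distributed transaction."""

    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit

    def prepare(self):      # phase 1: "are you able to commit?"
        return self.can_commit

    def commit(self):       # phase 2a: everyone voted yes
        print(self.name, "committed")

    def rollback(self):     # phase 2b: at least one participant refused
        print(self.name, "rolled back")

def run_transaction(participants):
    # All participants must vote yes before anything is made permanent.
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
    else:
        for p in participants:
            p.rollback()

run_transaction([Participant("orders_db"), Participant("billing_db")])
```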

The Multi or N-Tier Architecture

With the growth of internet-based applications, a common enhancement of the basic three-tier client/server model has been the addition of extra tiers. Such an architecture is referred to as 'n-tier' and typically comprises four tiers (Figure 3), where the web server is responsible for handling the connection between client browsers and the application server. The benefit is simply that multiple web servers can connect to a single application server, thereby handling more concurrent users.



Figure 3: N-tier architecture
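The web tier itself can be quite thin. In the sketch below (my own illustration; the stub stands in for the application server from the previous example), two web-server instances forward browser requests to a single shared application server.

```python
class ApplicationServerStub:
    """Stand-in for the middle tier sketched earlier."""

    def place_order(self, total):
        return 1  # the real middle tier would apply the rules and hit the DB

class WebServer:
    """Web tier: talks to browsers, forwards the work to the app tier."""

    def __init__(self, app_server):
        self.app = app_server  # many web servers can share one app server

    def handle_request(self, form):
        # Translate the browser's form data into an application call;
        # no business logic lives in this tier.
        order_id = self.app.place_order(float(form["total"]))
        return f"<p>Order {order_id} created</p>"

# Two web-tier instances sharing a single application server, so more
# concurrent browser sessions can be served.
app = ApplicationServerStub()
front_a, front_b = WebServer(app), WebServer(app)
print(front_a.handle_request({"total": "150"}))
print(front_b.handle_request({"total": "80"}))
```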

Tiers vs. Layers

These terms are often (regrettably) used interchangeably. However, they really are distinct and have definite meanings. The basic difference is that tiers are physical, while layers are logical. In other words, a tier can theoretically be deployed independently on a dedicated computer, while a layer is a logical separation within a tier (Figure 4). The typical three-tier model described above normally contains at least seven layers, split across the three tiers.

The key thing to remember about a layered architecture is that requests and responses each flow in one direction only, and that layers may never be "skipped". Thus, in the model shown in Figure 4, the only layer that can address layer "E" (the data access layer) is layer "D" (the rules layer). Similarly, layer "C" (the application validation layer) can only respond to requests from layer "B" (the error handling layer).



Figure 4: Tiers are divided into logical layers
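One common way to enforce that discipline in code is to give each layer a reference only to the layer directly beneath it, so skipping a layer is impossible by construction. A minimal sketch (my own; it models layers "C", "D" and "E" from Figure 4, with made-up rules and data):

```python
class DataAccess:
    """Layer E: the only layer that touches the data store."""

    def fetch(self, key):
        return {"id": key, "total": 150.0}  # would query the DBMS here

class Rules:
    """Layer D: the only layer allowed to address layer E."""

    def __init__(self):
        self._data = DataAccess()

    def get_order(self, key):
        order = self._data.fetch(key)
        if order["total"] > 100:          # an illustrative business rule
            order["total"] *= 0.9
        return order

class Validation:
    """Layer C: validates requests, talks only to layer D."""

    def __init__(self):
        self._rules = Rules()

    def get_order(self, key):
        if key <= 0:
            raise ValueError("key must be positive")
        return self._rules.get_order(key)

# A request enters at the top and passes down through each layer in turn;
# the response travels back up the same path, with no layer skipped.
print(Validation().get_order(1))  # {'id': 1, 'total': 135.0}
```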
