July 30, 2013

Essay on System Integration and Control

Introduction:

Advances in computing power have fuelled the desire to get more out of computers. Science has made it possible to build and experiment with new computer system functions on a continuous basis, and new techniques have been introduced that reduce computational complexity while improving robustness. These advances have opened up new aspects of computer system integration and control. It is now possible to integrate a very large number of programs into a single system; the number of such programs can grow exponentially. Combining so many programs into one system has also created a need for system control, so that the attention and processing these programs receive can be allocated optimally (Eklundh, 1991).

      Computer system integration has been done so cleverly that the PC network, MRP II, and the distributed control system can function independently; they are treated as autonomous islands of information. A management information system is placed over the process control system and ties these computing technologies together. The whole computer system then works much like a resource management team, or like an enterprise containing finance, marketing, sales, production, and engineering departments (National Academy of Engineering, National Research Council, 1991).

     It is often underestimated how difficult the simultaneous control of hardware and software is; sensing hardware while a program is running is very hard. To meet these goals, special-purpose external hardware can be added that gathers inputs from hardware sensors and correlates them with the internal programs so that the system can respond to events in real time. Developing such systems under these constraints is a daunting task. The system is also required to consume little power; the hardware should be light and shock-proof; the system should tolerate undesired changes in the environment and be insensitive to unexpected events; and it should behave intelligently in case of failure and be able to recover itself (Dudek, Jenkin, 2000).

Parallel computing:

The desire to get more from computer systems has introduced large problems that do not always require heavy computation and numerical analysis, yet still challenge the state of the art. Modes of computing and computer architectures have changed to meet this demand. Computer scientists use several approaches to classify a machine's architecture. The most common and most widely used approach was developed by Michael Flynn, who classifies a machine by the number of instruction streams it executes against the number of data streams it processes. He called these architectures single instruction stream, multiple data stream (SIMD) and multiple instruction stream, multiple data stream (MIMD) machines. This is not the only scheme: another approach classifies machines by their memory architecture. In a shared-memory machine, a number of processors use, or share, the same memory, whereas in a distributed-memory architecture each processor has its own separate memory and nothing is shared; communication between system components then takes place by passing messages (Lafferty, 1993).
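
To make the two classification axes above concrete, here is a minimal Python sketch, assuming NumPy vectorization as a stand-in for the SIMD style (one instruction applied to many data elements) and separate worker processes, each with its own private memory, as a stand-in for the distributed-memory style; the array sizes, chunk boundaries, and worker function are illustrative choices, not taken from Lafferty.

    import numpy as np
    from multiprocessing import Pool

    # SIMD flavour: one instruction (a vector add) applied across a whole
    # data stream at once; the array contents are arbitrary.
    a = np.arange(8)
    b = np.arange(8, 16)
    simd_result = a + b          # single instruction, multiple data elements

    # Distributed-memory flavour: independent workers, each operating on its
    # own private copy of the data, communicating only through returned results.
    def worker(chunk):
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        chunks = [range(0, 100), range(100, 200), range(200, 300)]
        with Pool(3) as pool:                    # three processes, no shared memory
            partial_sums = pool.map(worker, chunks)
        print(simd_result, sum(partial_sums))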

     Parallelism is about doing many things at one time. To obtain faster operation, with proper control of system components and programs, some form of parallel computing is used even when a high-speed computer has only one processor. Running programs simultaneously can be illustrated by a simple example: we have three cars to wash, each wash takes three minutes, and we have only five minutes in total, so the job can only be done by washing all three cars at the same time. This is similar to what we do daily on our computer systems when we run many programs concurrently. Pipelining is a technique that lets a system use its functional units more efficiently and at the same time. An example of pipelined work is the addition of floating-point numbers, which requires four steps: comparing the exponents, shifting one operand until the exponents are equal, adding the mantissas, and normalizing the result. With pipelining, if we are adding 32 pairs of numbers, the machine compares the exponents of the first pair, and as soon as the comparison hardware is free the next pair enters the comparison stage while the first pair moves on to exponent alignment. The stages run concurrently and continuously until the calculations for all thirty-two pairs are complete. In this way pipelining increases processing speed over sequential processing, but the pipeline has to be kept full in order to keep the hardware busy (Lafferty, 1993).
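
The cycle count of the four-stage floating-point pipeline just described can be sketched in a few lines of Python; this assumes one clock cycle per stage and the usual "fill the pipe, then one result per cycle" accounting, not the timing of any particular machine.

    # Hypothetical four-stage floating-point add pipeline from the text.
    STAGES = ["compare exponents", "align operands", "add mantissas", "normalize"]

    def sequential_cycles(n_pairs, n_stages=len(STAGES)):
        # without pipelining, each pair occupies the hardware for every stage
        return n_pairs * n_stages

    def pipelined_cycles(n_pairs, n_stages=len(STAGES)):
        # once the pipeline is full, one finished sum emerges every cycle
        return n_stages + (n_pairs - 1)

    print(sequential_cycles(32))   # 128 cycles
    print(pipelined_cycles(32))    # 35 cycles: 4 to fill the pipe, then 31 more results
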
System bus:
In order to maximize the functionality of a computer system, simultaneous and parallel computing were introduced. When several subsystems work together, the CPU has to send information, data, and instructions to the devices and components that are active, and even to the peripherals and hardware attached to the computer. The components of the computer system are joined together by electronic pathways laid out on the motherboard. These electronic paths, made up of tiny wires, carry data, instructions, and control signals, and they make it possible for the components of the computer system to communicate; they are, in essence, the buses that carry these signals. A computer system has many different kinds of buses, including the internal bus, expansion bus, external bus, data bus, memory bus, PCI bus, ISA bus, address bus, and control bus, each with its own specific function.

     Most commonly, buses are divided into two major types: the internal bus and the external bus. The internal bus, also known as the system bus, connects the CPU, system memory, and the other core components. The external bus connects peripherals, hardware, expansion slots, I/O ports, and drive connections to the computer; in other words, it adds external devices to the computer system for communication and expands the system's capabilities, which is why it is also referred to as the expansion bus. These tiny wires, traces, or electronic roads play a very important role in communication and data transfer: one of the two main bus types carries information between components on the motherboard, and the other connects external components to extend the communication and functionality of the system. Data is the essential element that these electronic paths carry between components. Within the internal and external buses, some paths are dedicated to special purposes; for example, the buses assigned the task of carrying data throughout the computer are called data buses, and system memory is the place where data is stored, processed, and manipulated (PC Computer Notes).
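
As a rough illustration of how a bus routes CPU reads and writes to whichever component owns an address, here is a small Python sketch; the device names, address ranges, and address-decoding scheme are invented for the example and do not describe any real motherboard.

    # Toy model of an address-decoded system bus: the CPU issues reads and
    # writes, and the bus forwards each transfer to the device that owns
    # the address.
    class Device:
        def __init__(self, name, base, size):
            self.name, self.base, self.size = name, base, size
            self.storage = {}

        def owns(self, addr):
            return self.base <= addr < self.base + self.size

    class Bus:
        def __init__(self, devices):
            self.devices = devices

        def write(self, addr, value):
            for dev in self.devices:
                if dev.owns(addr):
                    dev.storage[addr] = value
                    return
            raise ValueError("no device decodes address %#x" % addr)

        def read(self, addr):
            for dev in self.devices:
                if dev.owns(addr):
                    return dev.storage.get(addr, 0)
            raise ValueError("no device decodes address %#x" % addr)

    ram  = Device("system memory", 0x0000, 0x8000)
    uart = Device("serial port",   0x8000, 0x0010)
    bus  = Bus([ram, uart])
    bus.write(0x0042, 123)         # traffic to system memory
    bus.write(0x8000, ord("A"))    # traffic to an external peripheral
    print(bus.read(0x0042))
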
[Figure: the external bus interface]
Bus protocols:
Data moves between components according to certain rules, and these rules are the essence of buses; the rules a bus follows are called bus protocols, and they control the transfer of data on the bus, for example on the PCI bus. A serial interrupt bus protocol is implemented when a peripheral attached to the computer system signals its interrupt. When peripherals are attached and added on the serial interrupt bus, they have to integrate with the machine logic so that cycling through the interrupt states can take place. When a signal is sent to the serial interrupt controller, it determines which interrupt should be given priority and forwarded to the system's interrupt controller, according to the interrupt's state in the machine logic (FreePatentsOnline). Another bus protocol is the I2C protocol, implemented on the inter-integrated circuit bus; it is used in multi-master systems to permit collision detection, clock synchronization, and handshaking. On such a bus the peripherals are slaves and the microcontroller is the master: the master generates the clock, while a slave can hold the bus to insert a wait state (ePanorama.net). The bus snooping protocol is very important in symmetric multiprocessing because it is used to maintain cache coherency. When snooping, a cache watches the bus for requests touching blocks of which it holds a copy, since each cache may hold copies of data blocks present in physical memory. The two main snooping protocols are write-invalidate and write-update. Most bus protocols deal with the read and write behaviour of the system components so that communication remains fluent (Snooping protocol).
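
The write-invalidate variant of snooping mentioned above can be sketched in Python; this is a deliberately simplified model (write-through, no ownership states) in which every cache watches bus traffic and drops its copy of a block whenever another cache writes that block, so the class names and addresses are illustrative only.

    class SnoopBus:
        def __init__(self):
            self.caches, self.memory = [], {}

        def broadcast_invalidate(self, addr, origin):
            # every other cache snoops the write and invalidates its copy
            for cache in self.caches:
                if cache is not origin:
                    cache.snoop_invalidate(addr)

    class SnoopingCache:
        def __init__(self, name, bus):
            self.name, self.bus = name, bus
            self.lines = {}                       # block address -> cached value
            bus.caches.append(self)

        def read(self, addr):
            if addr not in self.lines:            # miss: fetch from memory
                self.lines[addr] = self.bus.memory.get(addr, 0)
            return self.lines[addr]

        def write(self, addr, value):
            self.bus.broadcast_invalidate(addr, origin=self)
            self.lines[addr] = value
            self.bus.memory[addr] = value         # write-through for simplicity

        def snoop_invalidate(self, addr):
            self.lines.pop(addr, None)            # drop any stale copy

    bus = SnoopBus()
    c0, c1 = SnoopingCache("cpu0", bus), SnoopingCache("cpu1", bus)
    c0.write(0x10, 7)
    print(c1.read(0x10))   # 7: c1 misses and fetches the up-to-date value
    c1.write(0x10, 9)      # invalidates c0's copy via snooping
    print(c0.read(0x10))   # 9: c0's stale line was dropped, so it refetches
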
Interrupt processing:
In an I-stream engine, multiprogramming has to be coordinated, and the interrupt mechanism is used for this coordination. An interrupt is normally taken between the completion of one instruction and the arrival of the next instruction for interpretation. There are six possible classes of interrupt: external, machine check, I/O, program, restart, and supervisor call, and a PSW is associated with each interrupt class. There are many hardware- and software-related interrupts, including concurrent interrupts handled in hardware and concurrent interrupts handled in software. The control and management of these interrupts is very important: handling them properly, according to their priority, accelerates multiprocessing and helps in the integration of system components (IBM).
[Figure: the bus adaptor chip places an interrupt request on the system internal bus]
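
Priority-ordered servicing of pending interrupts, as described above, might be sketched as follows; the numeric priorities assigned to the six classes are an assumption made for the example, not the actual ordering used by any particular machine.

    import heapq

    # Smaller number = more urgent; these values are illustrative only.
    PRIORITY = {"machine check": 0, "supervisor call": 1, "program": 2,
                "external": 3, "I/O": 4, "restart": 5}

    pending = []

    def raise_interrupt(kind, detail):
        heapq.heappush(pending, (PRIORITY[kind], kind, detail))

    def dispatch_all():
        # always service the highest-priority pending interrupt first
        while pending:
            _, kind, detail = heapq.heappop(pending)
            print("handling %s interrupt: %s" % (kind, detail))

    raise_interrupt("I/O", "disk transfer complete")
    raise_interrupt("machine check", "storage parity error")
    dispatch_all()   # the machine check is serviced before the I/O interrupt
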
Performance enhancement:

The main goal is to enhance the performance of the system components so that more can be obtained from the computer simultaneously. Several methods and techniques have been introduced for this purpose: buffering, caching, multiprocessing, and compression are all used to enhance the performance of the computer system by facilitating communication among system components. A buffer is a region of memory that holds data for a short time while it is being processed, which helps performance by keeping the data currently being worked on immediately available. A cache, on the other hand, holds a copy of a data block that is present in physical memory; when that block is needed again it can be retrieved from the cache without paying the time cost of a slower access. Multiprocessing lets different programs run concurrently without one affecting the functionality of another, and it is often assisted by pipelining. Compression reduces the volume of data that must be stored and moved, which is another step towards efficient multiprocessing (Gunther, 1998).
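
The time saving that caching provides can be shown with a short Python sketch; the 0.1-second delay merely stands in for a slow access to backing store, and the function name and block numbers are made up for the example.

    import functools
    import time

    @functools.lru_cache(maxsize=128)
    def read_block(block_id):
        time.sleep(0.1)                  # pretend this is a slow fetch from memory or disk
        return "data-for-%d" % block_id

    start = time.time()
    read_block(7)                        # miss: pays the full access cost
    first = time.time() - start

    start = time.time()
    read_block(7)                        # hit: answered from the cached copy
    second = time.time() - start

    print("first access %.3f s, cached access %.6f s" % (first, second))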

Conclusion:

The advancement and efficiency of system components have been obtained through better integration and control of those components. Data is transferred between system components throughout the computer by means of buses; these buses take the form of tiny wires and can be thought of as electronic paths through which data is controlled and carried to the other subsystems. The rules by which data is transferred over them are called bus protocols, and interrupts on the bus must be granted for data to be communicated between system components. Parallel computing is the field that introduced the term multiprocessing and made it practical through pipelining. Interrupts are signals raised when a certain event needs to be handled first, and they are part of multiprocessing. Buffering and caching make multiprocessing even easier and save time by providing data from a temporary or cached memory. The integration and control of system components is now vital, because this is the era of multiprocessing (Yew, Xue, 1998).


References:

Eklundh, Jan-Olof, 1991. Computer Vision, ECCV '94: Third European Conference on Computer ..., Volume 2. Springer-Verlag.

National Academy of Engineering, National Research Council (U.S.), Commission on Behavioral and Social Sc, 1991. People and Technology in the Workplace. National Academy Press.

Dudek, Gregory, Jenkin, Michael, 2000. Computational Principles of Mobile Robotics. Cambridge University Press.

Lafferty, Edward L., 1993. Parallel Computing: An Introduction. Noyes Data Corporation.

PC Computer Notes, The Bus. Retrieved 2 Sep 2010 from http://www.pccomputernotes.com/system_bus/bus01.htm

FreePatentsOnline, Serial Interrupt Bus Protocol. Retrieved 2 Sep 2010 from http://www.freepatentsonline.com/6055372.html

ePanorama.net, Serial Buses Information Page. Retrieved 2 Sep 2010 from http://www.epanorama.net/links/serialbus.html

Snooping Protocol. Retrieved 2 Sep 2010 from http://www.epanorama.net/links/serialbus.html

Gunther, Neil J., 1998. Analyzing Computer System Performance with Perl::PDQ. Springer.

Yew, Pen-Chung, Xue, Jingling, 1998. Advances in Computer Systems Architecture: 9th Asia-Pacific Conference. Springer.
