Sunday, February 22, 2009

Operating System Structure Part-2

In the last post we looked at the first way in which an operating system can be structured, the simple structure. From that discussion it is clear that the method is not very efficient, and that other structuring techniques can do much better. The key to the success of any operating system structure is proper hardware support: with it, the operating system can be broken into pieces that are smaller and more effective than those of the simple structure. If this is done successfully, the operating system gains much more control over the computer and over the applications that run on it. In this post we take a look at another way of making an operating system more modular, the layered approach.

Layered Approach:- In this method of structuring, the whole operating system is divided into a number of layers, in which the lowest layer represents the hardware and the highest layer represents the user interface. Each layer is an implementation of an abstract object, containing data and the routines required to access that data. A layer can invoke only the operations of the layers below it. This approach greatly simplifies debugging and system verification. The first layer can be debugged without any concern for the rest of the system, because there is no layer below it and the underlying hardware is assumed to function correctly. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on. If an error is found while debugging a particular layer, the error must be in that layer, because all the layers below it have already been debugged. Thus the design and implementation of the system are simplified.
The main problem with the layered approach lies in defining the layers, because a layer can use only the layers below it for its functioning. Another problem is that layered implementations tend to be less efficient than other types. For instance, when a user program executes an I/O operation, it executes a system call that is trapped to the I/O layer, which calls the memory management layer, which in turn calls the CPU scheduling layer, and only then is the request passed to the hardware. At each step the layer adds its own overhead, so the system call takes more time than it would in a non-layered system.
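The descent of a system call through the layers can be sketched as a toy simulation. The layer names follow the example above; the per-layer overhead figure is purely an illustrative assumption, not a measurement of any real system:

```python
# Toy model of a layered OS: a request may only travel downward,
# each layer adding its own bookkeeping overhead along the way.
# Layer names match the I/O example; costs are assumed, not real.

LAYERS = ["I/O", "memory management", "CPU scheduling", "hardware"]

def system_call(op):
    """Trace an operation as it descends through every layer."""
    trace = []
    overhead = 0
    for layer in LAYERS:
        trace.append(f"{layer} layer handles '{op}'")
        overhead += 1  # one unit of overhead added per layer crossed
    return trace, overhead

trace, overhead = system_call("read")
print(f"{len(trace)} layers crossed, total overhead units: {overhead}")
```

A non-layered system would dispatch the same request directly, which is exactly why the layered version trades some speed for its debugging benefits.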

Saturday, February 21, 2009

Operating System Structure Part-1

If a system is large and complex, it must be designed carefully so that it functions properly and can be modified easily. The most common approach is to partition the whole system into small components rather than building it as a single big structure. There are four approaches to achieving this. In this post we take a look at the first approach to structuring the system, the simple structure.

Simple Structure: As we all know, many commercial systems do not have a well-defined structure. The main reason is that they started out small and simple, with limited functionality, and then grew over time beyond their original scope. The most common example of such a commercial system is MS-DOS. When it was written, its designers had no idea the operating system would become so popular. The main aim of MS-DOS was to provide the most functionality in the least space, and in achieving this it was not carefully divided into modules.

In MS-DOS, the interfaces and levels of functionality are not well separated. The major drawback is that application programs are able to access the basic I/O routines, which means they can directly read from or write to the display and disk drives. This makes MS-DOS very vulnerable to malicious or errant programs: the failure of a user program can crash the entire system. The other major problem is that MS-DOS was limited by the hardware of its era, because the Intel 8088 for which it was written provides no dual mode and no hardware protection. In short, a lot of problems exist with this simple structure.

Thursday, February 19, 2009

Parallel Communication

Parallel communication takes place when the physical layer is capable of carrying multiple bits from one device to another. This means the data bus is composed of multiple data wires, in addition to control and possibly power wires, running in parallel from one device to another. Each wire carries one of the bits. Parallel communication has the advantage of high data throughput, provided the bus is short. The length of a parallel bus must be kept short because long parallel wires have high capacitance, and transmitting a bit on a wire with higher capacitance requires more time to charge or discharge it. In addition, small variations in the lengths of the individual wires of a parallel bus can cause the data bits to arrive at the receiver at different times.


Such misalignment of data becomes more of a problem as the length of a parallel bus increases. Another problem with parallel buses is that they are more costly to construct and may be bulky, especially considering the insulation that must be used to prevent the noise from each wire from interfering with the other wires. For example, a 32-wire cable connecting two devices will cost much more, and be much larger, than a two-wire cable. In general, parallel communication is used to connect devices that reside on the same IC or on the same circuit board. Since the length of such buses is short, the capacitance, data misalignment, and cost problems mentioned earlier do not play an important role.
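The capacitance argument above can be made concrete with the standard RC charging formula, t = -RC ln(1 - Vth/Vdd): the time for a wire to charge to the receiver's threshold grows in direct proportion to its capacitance. The resistance, capacitance, and threshold values below are illustrative assumptions, not figures for any particular bus:

```python
import math

def charge_time(r_ohms, c_farads, vth_frac=0.5):
    """Time for an RC wire to charge to vth_frac of the supply voltage."""
    return -r_ohms * c_farads * math.log(1.0 - vth_frac)

# Assumed values: doubling the wire length roughly doubles its
# capacitance, so the bit takes roughly twice as long to register.
R = 100.0        # driver resistance in ohms (assumed)
C_SHORT = 1e-12  # 1 pF for a short wire (assumed)
C_LONG = 2e-12   # 2 pF for a wire about twice as long (assumed)

t_short = charge_time(R, C_SHORT)
t_long = charge_time(R, C_LONG)
print(f"short wire: {t_short:.2e} s, long wire: {t_long:.2e} s")
```

With capacitance doubled, the charge time doubles as well, which is why stretching a parallel bus directly cuts into the bit rate it can sustain.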

Wednesday, February 18, 2009

Cache Replacement Policy

The cache replacement policy is the technique for choosing which cache block to replace when a fully associative cache is full, or when a set-associative cache's set is full. Note that there is no choice in a direct-mapped cache: a main memory address always maps to the same cache address and thus replaces whatever block is already there. There are three common replacement policies. The first is the random replacement policy, which chooses the block to replace at random. While simple to implement, this policy does nothing to prevent replacing a block that is likely to be used soon.

The second is the least recently used (LRU) replacement policy, which replaces the block that has not been accessed for the longest time, on the assumption that it is the least likely to be accessed in the near future. This technique provides an excellent hit/miss ratio but requires expensive hardware to keep track of when blocks are accessed. The third is the first-in first-out (FIFO) replacement policy, which uses a queue of size N, pushing each block's address onto the queue when the block is brought into the cache, and choosing the block to replace by popping the queue.
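The difference between LRU and FIFO can be sketched with a minimal simulation of a 2-block fully associative cache. The access sequence is illustrative; the point is that LRU keeps the repeatedly used block A resident, while FIFO eventually evicts it simply because it arrived first:

```python
from collections import OrderedDict, deque

def simulate_lru(accesses, capacity):
    """Count hits under least-recently-used replacement."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used
            cache[block] = True
    return hits

def simulate_fifo(accesses, capacity):
    """Count hits under first-in first-out replacement."""
    cache, queue, hits = set(), deque(), 0
    for block in accesses:
        if block in cache:
            hits += 1                      # a hit does not change FIFO order
        else:
            if len(cache) >= capacity:
                cache.discard(queue.popleft())  # evict the oldest block
            cache.add(block)
            queue.append(block)
    return hits

accesses = ["A", "B", "A", "C", "A", "B"]
print("LRU hits:", simulate_lru(accesses, 2))   # A survives: 2 hits
print("FIFO hits:", simulate_fifo(accesses, 2)) # A evicted early: 1 hit
```

Random replacement would fall somewhere in between on average, which is exactly the trade-off described above: LRU buys its better hit ratio with extra bookkeeping that, in hardware, is expensive.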

Sunday, February 8, 2009

IC Technology

Before we go into the details of IC technology, we first have to understand what an IC is. IC stands for Integrated Circuit. An IC, often called a chip, is a semiconductor device consisting of a set of connected transistors and other devices. Every processor must eventually be implemented on an integrated circuit (IC). IC technology involves the manner in which we map a digital implementation onto an IC. A number of different processes exist for building semiconductors, the most popular of which is complementary metal oxide semiconductor (CMOS). IC technology is independent of processor technology, which means any type of processor can be mapped to any type of IC technology. Now the main question in front of us is: what are the differences between the different IC technologies?

To understand the differences between IC technologies, we must first recognize that semiconductors consist of numerous layers. The bottom layers form the transistors, the middle layers form logic components, and the top layers connect these components with wires. One way to create these layers is to deposit photo-sensitive chemicals on the chip surface and then shine light through masks to change regions of the chemicals. Thus the task of building the layers is really one of designing the appropriate masks; a set of masks is often called a layout.

Saturday, February 7, 2009

Cache Memory

As we all know, the processing core and the memory are two integral parts of any computer. Memory can further be divided into two categories, slow memory and fast memory: RAM and the hard disk are slow memory, while cache is the fast memory. The terms slow and fast refer to the speed with which data can be accessed from these memory units. In simple words, cache memory is memory used to store frequently used data so that the CPU doesn't have to wait for the data to be fetched from slower storage areas like the system RAM or the hard disk.

The main reason data can be accessed so quickly from cache memory is that it is built from quite expensive static RAM, which does not require refreshing. Once data is stored in the cache, future use can be made of the cached copy rather than re-fetching or re-computing the original data, and this is what makes the average access time shorter. Cache therefore helps expedite access to data that the CPU would otherwise need to fetch from main memory. The main advantages of cache memory are that it increases CPU performance to a great extent and, in the case of a disk cache, reduces use of the read/write head, which in turn extends the drive's life. Nowadays cache memory is available in sizes ranging from 2 MB to 16 MB.
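The benefit can be quantified with the standard average memory access time formula, AMAT = hit time + miss rate × miss penalty. The timings and miss rate below are illustrative assumptions, not measured values:

```python
def avg_access_time(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average memory access time: hit cost plus the expected miss cost."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Assumed figures: a 1 ns cache hit, a 100 ns trip to main memory,
# and a cache that satisfies 95% of accesses.
no_cache = avg_access_time(100.0, 0.0, 0.0)     # every access goes to RAM
with_cache = avg_access_time(1.0, 0.05, 100.0)  # only 5% fall through

print(f"without cache: {no_cache} ns, with cache: {with_cache} ns")
```

Even with a modest 95% hit rate, the average access drops from 100 ns to 6 ns in this sketch, which is why a small amount of expensive static RAM pays for itself.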

Friday, February 6, 2009

Single Purpose Processors

A single purpose processor is a digital circuit designed to execute exactly one program. An embedded system designer may create a single purpose processor by designing a custom digital circuit, or may purchase a predesigned one. Many people refer to this part of the implementation simply as the hardware portion, although even software requires a hardware processor on which to run. Used in an embedded system, a single purpose processor has both advantages and disadvantages compared to a general purpose processor. Its advantages are that performance may be fast, size small, power consumption low, and unit cost low for large quantities. Its disadvantages are very long design time, high NRE cost, low flexibility, and high unit cost for small quantities.

Thursday, February 5, 2009

General Purpose Processors

A general purpose processor is a programmable device that is suitable for a variety of applications. The most important feature of these processors is program memory: the designer of such a processor does not know what program will run on it, so the program cannot be built into the digital circuit. Another important feature is a general datapath: the datapath must be general enough to handle a variety of computations, so it typically has a large register file and one or more general purpose arithmetic logic units (ALUs). An embedded system designer, however, need not be concerned with the design of the general purpose processor itself; the designer simply programs the processor's memory to carry out the required functionality. Using a general purpose processor in an embedded system can bring several design benefits.

Wednesday, February 4, 2009

Understanding BIOS/CMOS

Though often used interchangeably, the two terms refer to different things. BIOS (Basic Input Output System) refers to a set of instructions that are critical for the functioning of the system. These instructions include information about the components connected to the motherboard, like the hard disk, the RAM, and the configuration of the many onboard subsystems. CMOS (Complementary Metal Oxide Semiconductor) refers to the chip on which the BIOS instructions are stored; CMOS is the name of the technology behind that chip. Present-day BIOS chips are technically EEPROMs, Electrically Erasable Programmable Read-Only Memory. These can be rewritten, which allows the BIOS instructions of a motherboard to be updated when required. The process of updating the BIOS instructions is called flashing.

Monday, February 2, 2009

Flash Memory

Flash memory is a type of memory that is non-volatile, which means it does not need power to maintain the information stored in the chip. It is primarily used in memory cards and USB flash drives for general storage of data, and you can also use these devices to transfer data between computers and other digital products. The main advantage of flash memory is that it can be electrically erased and reprogrammed. Internally, a flash drive consists of a flash memory chip that stores the data, a controller that manages the read/write operations, and a USB interface. Nowadays flash memory is also available in non-integrated form, as cards that plug into memory card readers with a USB interface. Flash drives offer fast read access times and better kinetic shock resistance than hard disks. In addition, when flash memory is packaged in a memory card it offers very high durability, able to withstand intense pressure, extremes of temperature, and immersion in water.