Tuesday, March 5, 2013

Computer Organisation Notes



OCR (optical character recognition) and OMR (optical mark recognition) are specialised systems that convert images on paper into a format that can easily be read and processed by a computer. Both technologies comprise hardware and software components: a scanner reads the page, and software recognises the images and deciphers them into electronic form.
OCR recognises letters, numerals, punctuation, and related communication symbols. Its main use is getting printed text into a form that can be manipulated by a word processor or similar computer program; an OCR (Optical Character Reader) converts written text to typed (digital) text that can be edited on the computer.
OMR recognises the presence or absence of marks in predetermined positions on a sheet of paper. An OMR (Optical Mark Reader) reads marks off hard copies, for example multiple-choice exam sheets, and is what is used for automatically marking multiple-choice exams, collating results from surveys and censuses, and so on. Basically, OCR reads characters, and OMR reads marks.
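To make the OMR idea concrete, here is a minimal sketch in C of mark detection: given a scanned page as a grayscale bitmap, it counts the dark pixels inside each predefined bubble region and treats a region as "marked" when the dark fraction exceeds a threshold. The page size, bubble coordinates, and 40% threshold are illustrative assumptions, not the behaviour of any particular OMR product.

    #include <stdio.h>

    /* Hypothetical scanned page: 8-bit grayscale, row-major, 0 = black. */
    #define PAGE_W 850
    #define PAGE_H 1100

    /* A predefined bubble position on the form (illustrative coordinates). */
    struct region { int x, y, w, h; };

    /* A region counts as "marked" when enough of its pixels are dark. */
    static int is_marked(const unsigned char *page, struct region r)
    {
        int dark = 0;
        for (int row = r.y; row < r.y + r.h; row++)
            for (int col = r.x; col < r.x + r.w; col++)
                if (page[row * PAGE_W + col] < 128)   /* dark pixel */
                    dark++;
        return dark * 100 > r.w * r.h * 40;   /* assumed 40% threshold */
    }

    int main(void)
    {
        static unsigned char page[PAGE_W * PAGE_H];
        for (int i = 0; i < PAGE_W * PAGE_H; i++)
            page[i] = 255;                            /* blank white page */

        struct region q1[4] = {                       /* question 1, A-D */
            {100, 200, 20, 20}, {140, 200, 20, 20},
            {180, 200, 20, 20}, {220, 200, 20, 20}
        };

        for (int row = 200; row < 220; row++)         /* shade choice C */
            for (int col = 180; col < 200; col++)
                page[row * PAGE_W + col] = 0;

        for (int i = 0; i < 4; i++)
            printf("choice %c: %s\n", 'A' + i,
                   is_marked(page, q1[i]) ? "marked" : "blank");
        return 0;
    }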
Advantages
·         OMR readers are able to input large amounts of data with minimal human intervention. They are reliable, cost-effective, and fast, and they can handle huge volumes of data in mere minutes: an OMR scanner is able to read anywhere between 1,500 and 10,000 forms an hour.
·         OCR readers can input large volumes of data in digitised form, which can then be manipulated by a word processor. OCR systems generate fewer errors than manual data entry, saving valuable time and costs that would otherwise be spent on error-correction workstations. An OCR system is able to read 420 characters per second with an accuracy rate of 98 percent.
Disadvantages
·         OMR readers are unable to recognise machine- or hand-printed characters. Data cannot be retrieved electronically afterwards, since images of the processed forms are not stored. Marks need to fall within a specified area if the form is to be processed accurately.
·         The accuracy of an OCR system depends on the readability of the original source. OCR systems are expensive, costly to maintain, and require manual intervention.


An interface device (IDF) is a hardware component or system of components that allows a human being to interact with a computer, a telephone system, or another electronic information system. An interface device generally must include some form of output interface, such as a display screen or audio signals, and some form of input interface, such as buttons to push, a keyboard, a voice receiver, or a handwriting tablet.
Vectored Interrupt
When a processor is interrupted to perform a particular task, the program counter must be loaded with the address of the corresponding subroutine (the interrupt service routine). If the processor generates this address automatically, the interrupt is known as a vectored interrupt. For example, if an 8085 microprocessor is interrupted through the RST 5.5 pin, the processor multiplies 5.5 by 8 to obtain the vector address 002CH. If the user has to supply the subroutine address, for example through a CALL instruction, it is known as a non-vectored interrupt.
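A quick worked sketch of the 8085 vector calculation described above: each RST n interrupt vectors to address n × 8, so RST 5.5 lands at 5.5 × 8 = 44 = 002CH. The little C program below only illustrates that arithmetic; it is not processor code.

    #include <stdio.h>

    /* 8085 restart interrupts and their "n" values; RST n vectors to n * 8. */
    static const struct { const char *name; double n; } rst[] = {
        {"TRAP (RST 4.5)", 4.5}, {"RST 5.5", 5.5}, {"RST 6.5", 6.5}, {"RST 7.5", 7.5}
    };

    int main(void)
    {
        for (int i = 0; i < 4; i++) {
            unsigned addr = (unsigned)(rst[i].n * 8);   /* vector address */
            printf("%-15s -> %04XH\n", rst[i].name, addr);
        }
        return 0;
    }

Running it prints 0024H, 002CH, 0034H, and 003CH, the familiar 8085 restart vector addresses.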


IEEE-488 is a short-range digital communications bus specification. It was created for use with automated test equipment.
IEEE-488 is an 8 bit, electrically parallel bus. The bus employs sixteen signal lines — eight used for bi-directional data transfer, three for handshake, and five for bus management — plus eight ground return lines.

Advantages

§  Simple hardware interface
§  Ease of connecting multiple devices to a single host
§  Allows mixing of slow and fast devices
§  Well-established and mature, widely supported
§  Rugged connectors, held in place by screws, mean cables can't easily be removed accidentally as they can with FireWire and USB

Disadvantages

§  Mechanically bulky connectors and cables
§  Limited speed and expansion
§  Lack of command protocol standards (before SCPI)
§  Implementation options (e.g. end of transmission handling) can complicate interoperability in pre-IEEE-488.2 devices
§  No mandatory galvanic isolation between bus and devices
§  High cost (compared to RS-232/USB/Firewire/Ethernet)
§  Limited availability (again compared to RS-232/USB/Firewire/Ethernet)
Interrupt and Exception Nesting
Every interrupt or exception gives rise to a kernel control path or separate sequence of instructions that execute in Kernel Mode on behalf of the current process. For instance, when an I/O device raises an interrupt, the first instructions of the corresponding kernel control path are those that save the contents of the CPU registers in the Kernel Mode stack, while the last are those that restore the contents of the registers.
Kernel control paths may be arbitrarily nested; an interrupt handler may be interrupted by another interrupt handler, thus giving rise to a nested execution of kernel control paths.


The Linux design does not allow process switching while the CPU is executing a kernel control path associated with an interrupt. However, such kernel control paths may be arbitrarily nested; an interrupt handler may be interrupted by another interrupt handler, thus giving rise to a nested execution of kernel control paths. We emphasise that the current process does not change while the kernel is handling a nested set of kernel control paths.
Kernel control path interleaving serves two main purposes:
·         To improve the throughput of programmable interrupt controllers and device controllers. Assume that a device controller issues a signal on an IRQ line: the PIC transforms it into an external interrupt, and then both the PIC and the device controller remain blocked until the PIC receives an acknowledgment from the CPU. Thanks to kernel control path interleaving, the kernel is able to send the acknowledgment even while it is handling a previous interrupt.
·         To implement an interrupt model without priority levels. Since each interrupt handler may be deferred by another one, there is no need to establish predefined priorities among hardware devices. This simplifies the kernel code and improves its portability.
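To illustrate the idea of nested kernel control paths, here is a schematic C sketch: a generic interrupt entry that saves register state, acknowledges the controller early, and re-enables interrupts so another handler can nest on top, while tracking nesting depth. All the function names (save_cpu_registers, ack_pic, and so on) are hypothetical placeholders, not real Linux kernel APIs.

    #include <stdio.h>

    /* Hypothetical placeholder stubs so the sketch compiles; in a real
     * kernel these would touch hardware and the saved register frame. */
    static void save_cpu_registers(void)    { puts("save registers"); }
    static void restore_cpu_registers(void) { puts("restore registers"); }
    static void ack_pic(int irq)            { printf("ack IRQ %d\n", irq); }
    static void enable_interrupts(void)     {}
    static void disable_interrupts(void)    {}
    static void run_handler(int irq)        { printf("handle IRQ %d\n", irq); }

    static volatile int nesting_depth;      /* how many handlers are stacked */

    void generic_interrupt_entry(int irq)
    {
        save_cpu_registers();               /* first instructions: save state */
        nesting_depth++;

        ack_pic(irq);                       /* unblock the PIC early, so      */
        enable_interrupts();                /* further IRQs can nest on top   */

        run_handler(irq);                   /* may itself be interrupted here */

        disable_interrupts();
        nesting_depth--;
        restore_cpu_registers();            /* last instructions: restore     */
        /* The current process never changes while handlers are nested. */
    }

    int main(void)
    {
        generic_interrupt_entry(14);        /* simulate one interrupt */
        return 0;
    }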
RS-232C is a long-established standard ("C" is the current version) that describes the physical interface and protocol for relatively low-speed serial data communication between computers and related devices. It was defined by an industry trade group, the Electronic Industries Association (EIA), originally for teletypewriter devices.
RS-232C is the interface that your computer uses to talk to and exchange data with your modem and other serial devices. Somewhere in your PC, typically on a Universal Asynchronous Receiver/Transmitter (UART) chip on your motherboard, the data from your computer is transmitted to an internal or external modem (or other serial device) from its Data Terminal Equipment (DTE) interface. Since data in your computer flows along parallel circuits and serial devices can handle only one bit at a time, the UART chip converts the groups of bits in parallel to a serial stream of bits. As your PC's DTE agent, it also communicates with the modem or other serial device, which, in accordance with the RS-232C standard, has a complementary interface called the Data Communications Equipment (DCE) interface.
In telecommunication, RS-232 (Recommended Standard 232) is the traditional name for a series of standards for serial binary single-ended data and control signals connecting a DTE (Data Terminal Equipment) and a DCE (Data Circuit-terminating Equipment). It is commonly used in computer serial ports. The standard defines the electrical characteristics and timing of signals, the meaning of signals, and the physical size and pinout of connectors.
Electronic data communications between elements generally fall into two broad categories: single-ended and differential. RS-232 (single-ended) was introduced in 1962 and, despite rumors of its early demise, has remained widely used throughout the industry. RS-232C (Recommended Standard 232, revision C) is a standard interface approved by the Electronic Industries Alliance (EIA) for connecting serial devices.
RS-232 data is bipolar: +3 to +12 volts indicates an "ON" or 0-state (SPACE) condition, while −3 to −12 volts indicates an "OFF" or 1-state (MARK) condition. An RS-232 port can supply only limited power to another device. The number of output lines, the type of interface driver IC, and the state of the output lines are important considerations.
Data is transmitted and received on pins 2 and 3 respectively. Data Set Ready (DSR) is an indication from the data set (i.e., the modem or DSU/CSU) that it is on. Similarly, DTR indicates to the data set that the DTE is on. Data Carrier Detect (DCD) indicates that a good carrier is being received from the remote modem.
Pins 4, RTS (Request To Send, from the transmitting computer), and 5, CTS (Clear To Send, from the data set), are used for flow control. In most asynchronous situations, RTS and CTS are constantly on throughout the communication session. However, where the DTE is connected to a multipoint line, RTS is used to turn the modem's carrier on and off. On a multipoint line, it is imperative that only one station transmits at a time (because they share the return phone pair). When a station wants to transmit, it raises RTS. The modem turns on carrier, typically waits a few milliseconds for the carrier to stabilize, and then raises CTS. The DTE transmits when it sees CTS up. When the station has finished its transmission, it drops RTS, and the modem drops CTS and carrier together.
Clock signals (pins 15, 17, & 24) are only used for synchronous communications. The modem or DSU extracts the clock from the data stream and provides a steady clock signal to the DTE. Note that the transmit and receive clock signals do not have to be the same, or even at the same baud rate.
CTS       Clear To Send [DCE --> DTE]
DCD       Data Carrier Detected (tone from a modem) [DCE --> DTE]
DCE       Data Communications Equipment, e.g. modem
DSR       Data Set Ready [DCE --> DTE]
DSRS      Data Signal Rate Selector [DCE --> DTE] (not commonly used)
DTE       Data Terminal Equipment, e.g. computer, printer
DTR       Data Terminal Ready [DTE --> DCE]
FG        Frame Ground (screen or chassis)
NC        No Connection
RCk       Receiver (external) Clock input
RI        Ring Indicator (ringing tone detected)
RTS       Request To Send [DTE --> DCE]
RxD       Received Data [DCE --> DTE]
SG        Signal Ground
SCTS      Secondary Clear To Send [DCE --> DTE]
SDCD      Secondary Data Carrier Detected (tone from a modem) [DCE --> DTE]
SRTS      Secondary Request To Send [DTE --> DCE]
SRxD      Secondary Received Data [DCE --> DTE]
STxD      Secondary Transmitted Data [DTE --> DCE]
TxD       Transmitted Data [DTE --> DCE]
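As a practical illustration of the DTE side of this interface, here is a minimal C sketch that configures a serial port for 9600 baud, 8 data bits, no parity, 1 stop bit (8N1) with RTS/CTS hardware flow control, using the POSIX termios API. The device path /dev/ttyS0 is an assumption for a Linux machine, and CRTSCTS is a common but non-POSIX extension, hence the #ifdef guard.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        /* Assumed device node; adjust for your system. */
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); return 1; }

        cfmakeraw(&tio);                     /* raw byte stream, no line editing */
        cfsetispeed(&tio, B9600);            /* 9600 baud in each direction */
        cfsetospeed(&tio, B9600);

        tio.c_cflag &= ~(PARENB | CSTOPB);   /* no parity, 1 stop bit */
        tio.c_cflag |= CS8 | CLOCAL | CREAD; /* 8 data bits, enable receiver */
    #ifdef CRTSCTS
        tio.c_cflag |= CRTSCTS;              /* RTS/CTS hardware handshake */
    #endif

        if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); return 1; }

        const char msg[] = "AT\r";           /* e.g. poke an attached modem */
        write(fd, msg, strlen(msg));
        close(fd);
        return 0;
    }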
 
Control units
A control unit in general is a central (or sometimes distributed but clearly distinguishable) part of a machine that controls its operation, provided the machine is complex and organised enough to contain such a unit. One domain in which the term is specifically used is computer design.

Hardwired Control

At one time, control units for CPUs were ad-hoc logic, and they were difficult to design. The control unit can be regarded as the central part of the computer, the device that keeps the rest of the machine functioning correctly. A hardwired control unit is constructed from logic gates, flip-flops, encoder circuits, decoder circuits, digital counters, and other digital circuits. Its control is based on a fixed architecture, i.e. it requires changes in the wiring if the instruction set is modified or changed. This architecture is preferred in RISC computers, which use a smaller instruction set.
Hardwired control units are implemented with sequential logic, featuring a finite number of gates that generate specific control signals in response to the instructions used to invoke them. These instructions are apparent in the design of the architecture, but can also be represented in other ways.
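As a toy illustration of the hardwired approach, the C decoder below maps each opcode of an invented four-instruction machine directly to a fixed set of control signals, the way combinational logic would. The opcodes and signal names are made up for this example.

    #include <stdio.h>

    /* Control signals our imaginary datapath needs (invented for this sketch). */
    struct control_signals {
        int alu_op;        /* 0 = add, 1 = sub */
        int reg_write;     /* write result back to the register file */
        int mem_read;      /* read data memory */
        int mem_write;     /* write data memory */
    };

    /* Hardwired decode: a fixed mapping from opcode to signals.  Changing
     * the instruction set means changing this "wiring". */
    struct control_signals decode(unsigned opcode)
    {
        struct control_signals s = {0, 0, 0, 0};
        switch (opcode) {
        case 0x0: s.alu_op = 0; s.reg_write = 1; break;   /* ADD   */
        case 0x1: s.alu_op = 1; s.reg_write = 1; break;   /* SUB   */
        case 0x2: s.mem_read = 1; s.reg_write = 1; break; /* LOAD  */
        case 0x3: s.mem_write = 1; break;                 /* STORE */
        }
        return s;
    }

    int main(void)
    {
        struct control_signals s = decode(0x2);           /* decode a LOAD */
        printf("alu=%d regw=%d memr=%d memw=%d\n",
               s.alu_op, s.reg_write, s.mem_read, s.mem_write);
        return 0;
    }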

Microprogram Control Unit

The idea of microprogramming was introduced by M. V. Wilkes in 1951 as an intermediate level to execute computer program instructions (see also: microcode). Microprograms were organised as sequences of microinstructions and stored in a special control memory. The algorithm for the microprogram control unit is usually specified by a flow-chart description.[1] The main advantage of the microprogram control unit is the simplicity of its structure: the controller's outputs are organised as microinstructions, and they can easily be replaced.
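A small sketch of the same idea in code: instead of fixed wiring, each machine instruction indexes a control store of microinstructions whose bits drive the datapath. The microinstruction format and the store contents here are invented for illustration.

    #include <stdio.h>

    /* Invented microinstruction word: each bit drives part of the datapath. */
    #define UI_ALU_ADD   0x01
    #define UI_ALU_SUB   0x02
    #define UI_REG_WRITE 0x04
    #define UI_MEM_READ  0x08
    #define UI_NEXT      0x80   /* fetch the next macro-instruction when set */

    /* Control store: opcode n's microprogram starts at row n * 2 (toy layout). */
    static const unsigned char control_store[] = {
        /* ADD  */ UI_ALU_ADD,  UI_REG_WRITE | UI_NEXT,
        /* SUB  */ UI_ALU_SUB,  UI_REG_WRITE | UI_NEXT,
        /* LOAD */ UI_MEM_READ, UI_REG_WRITE | UI_NEXT,
    };

    /* Microprogram sequencer: step through the microinstructions for one
     * opcode.  Changing behaviour means editing control_store, not rewiring. */
    void execute(unsigned opcode)
    {
        unsigned upc = opcode * 2;                    /* micro-program counter */
        for (;;) {
            unsigned char ui = control_store[upc++];
            printf("uPC=%u signals=0x%02X\n", upc - 1, ui);
            if (ui & UI_NEXT)                         /* microprogram done */
                break;
        }
    }

    int main(void)
    {
        execute(2);   /* run the LOAD microprogram */
        return 0;
    }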


When designing a new microprocessor or microcontroller unit, there are a few general steps that can be followed to make the process flow more logically. These few steps can be further sub-divided into smaller tasks that can be tackled more easily. The general steps to designing a new microprocessor are:
1.     Determine the capabilities the new processor should have.
2.     Lay out the datapath to handle the necessary capabilities.
3.     Define the machine code instruction format (ISA).
4.     Construct the necessary logic to control the datapath.
Lay out the basic arithmetic operations you want your chip to have:
§  Addition/Subtraction
§  Multiplication
§  Division
§  Shifting and Rotating
§  Logical Operations: AND, OR, XOR, NOR, NOT, etc.
List the other capabilities that your machine will have:
§  Unconditional jumps
§  Conditional Jumps (and what conditions?)
§  Stack operations (Push, pop)
Design the Datapath
Right off the bat we need to determine what ALU architecture our processor will use:
§  Accumulator
§  Stack
§  Register
§  A combination of the above 3
Create ISA
Once we have our basic datapath, we can start to design our ISA. There are a few things that we need to consider:
1.     Is this processor RISC, CISC, or VLIW?
2.     How long is a machine word?
3.     How do you deal with immediate values? What kinds of instructions can accept immediate values?
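To make these questions concrete, here is a hedged sketch of one possible answer in C: a 16-bit machine word with a 4-bit opcode, a 4-bit destination register, and an 8-bit immediate field. This layout is invented for the example; real ISAs make these trade-offs differently.

    #include <stdio.h>
    #include <stdint.h>

    /* One invented 16-bit instruction format:
     *   [15:12] opcode   [11:8] destination register   [7:0] immediate  */
    static uint16_t encode(unsigned op, unsigned rd, unsigned imm)
    {
        return (uint16_t)((op & 0xF) << 12 | (rd & 0xF) << 8 | (imm & 0xFF));
    }

    static void decode(uint16_t word)
    {
        unsigned op  = word >> 12;
        unsigned rd  = (word >> 8) & 0xF;
        unsigned imm = word & 0xFF;
        printf("op=%u rd=%u imm=%u\n", op, rd, imm);
    }

    int main(void)
    {
        uint16_t w = encode(3, 2, 200);   /* e.g. "load r2 with immediate 200" */
        printf("word=0x%04X\n", w);
        decode(w);                        /* recover the fields */
        return 0;
    }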
Instruction Set Design:
The early CISC years focused on making instruction sets that expert assembly language programmers enjoyed programming -- "code density" was a common metric.

The early RISC years focused on making instruction sets that ran a few benchmark programs in C, when compiled with relatively primitive compilers, really, really fast -- "cycles per instruction", and later "instructions per cycle", was recognized as an important part of achieving a low "time to run the benchmark".
The rise of multitasking operating systems (and shared-memory parallel processors) led to the discovery of non-blocking synchronization and the instructions necessary to support it.
CPUs dedicated to a single application (ASICs or FPGAs) led to the idea of customizing the CPU for one particular application.
The rise of viruses and other malware led to the recognition of the Popek and Goldberg virtualization requirements.
Build Control Logic:
Once we have our datapath and our ISA, we can start to construct the logic of our primary control unit. These units are typically implemented as finite state machines, and we can try to map the ISA onto the control unit in a logical way.
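As a sketch of that finite-state-machine idea, the C loop below steps a control unit through the classic fetch, decode, and execute states. The states, opcodes, and transitions are invented for illustration; in a real design they would be derived from the ISA.

    #include <stdio.h>

    /* Control-unit states for a classic instruction cycle (illustrative). */
    enum state { FETCH, DECODE, EXECUTE, HALTED };

    int main(void)
    {
        /* Tiny invented program: two opcodes, then 0xFF meaning "halt". */
        const unsigned char program[] = {0x10, 0x21, 0xFF};
        unsigned pc = 0, ir = 0;
        enum state s = FETCH;

        while (s != HALTED) {
            switch (s) {
            case FETCH:                     /* load IR, advance PC */
                ir = program[pc++];
                printf("FETCH   pc=%u ir=0x%02X\n", pc - 1, ir);
                s = DECODE;
                break;
            case DECODE:                    /* choose the next state from IR */
                printf("DECODE  ir=0x%02X\n", ir);
                s = (ir == 0xFF) ? HALTED : EXECUTE;
                break;
            case EXECUTE:                   /* assert datapath signals here */
                printf("EXECUTE ir=0x%02X\n", ir);
                s = FETCH;
                break;
            case HALTED:
                break;
            }
        }
        return 0;
    }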
Verify the design
People who design a CPU often spend more time on functional verification than all other steps combined.
