Welcome to 'featurecoding', a computer and job-exam preparation tutorial site. If you're new to computers or just want to update your skills, you've come to the right place. All the courses are aimed at complete beginners, so you don't need prior experience to get started.

 Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints. 


 Computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation.


WHAT IS COMPUTER ARCHITECTURE?

 Computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems. 


 Structure: static arrangement of the parts.


 Organization: dynamic interaction of the parts and their control. 


 Implementation: design of specific building blocks.


 Performance: behavioral study of the system or of some of its components.


WHY COMPUTER ARCHITECTURE?

 Because computer architecture is perhaps the most fundamental subject in computer science. Without computers, the field of computer science does not exist.


 Whatever we do, whether surfing the web, sending email, or writing a document, runs on top of computer architecture. The subject explores how machines are designed, built, and operated.


 Knowing what's inside and how it works will help you design, develop, and implement applications that are better, faster, cheaper, more efficient, and easier to use, because you will be able to make informed decisions instead of estimating and assuming.


WHAT YOU WILL LEARN IN THIS COURSE 

 In this course, you will learn the contemporary state of the art in computer hardware, from the internal workings of nano-scale microprocessors to large-scale data center infrastructures. You will also learn how to program manycore machines, as well as clusters of virtual/physical machines that power data centers and, in turn, enable cloud computing.


 CATEGORIES OF THE COURSE CONTENT 

 Specifically, the course contents are organized into three categories: the macro-scale motherboard, the micro-scale microprocessor, and large-scale clusters of manycore microprocessors.


MACRO-SCALE TOPICS 

 Macro-scale topics include motherboard organization, bus architecture, L1-L3 cache organization and cache policies, DDR1-4 SDRAM memory and DIMM cards, storage and networking devices, and interconnection networks, including switch architecture.


MICRO-SCALE TOPICS 

 Micro-scale topics explicate the internal workings of microprocessors, including integer and floating-point arithmetic, pipelining with dynamic instruction scheduling, branch prediction with speculation, hardware multithreading, and the exploitation of instruction-level, data-level, and thread-level parallelism.


LARGE-SCALE TOPICS 

 Large-scale topics range from many cores to many virtual/physical machines, and cover the programming paradigms that power datacenters with adaptive, elastic computing infrastructures, that is, cloud computing. The technologies behind datacenters include virtualization of physical resources, virtual machine monitors, and load balancing of virtual/physical machines with both autonomous and scheduled virtual machine migration.


HISTORY OF COMPUTER ARCHITECTURE 

 The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. 


 When building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e. the stored-program concept. 


 Two other early and important examples are: John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements; and Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also from 1945, which cited von Neumann's paper.


THE TERM ARCHITECTURE 

 The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson, Frederick P. Brooks, Jr., and Mohammad Usman Khan, all members of the Machine Organization department in IBM’s main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of “system architecture”, a term that seemed more useful.


EARLIEST COMPUTER ARCHITECTURE 

 The earliest computer architectures were designed on paper and then built directly in their final hardware form. Later, computer architecture prototypes were physically built in the form of a transistor–transistor logic (TTL) computer, such as the prototypes of the 6800 and the PA-RISC, then tested and tweaked before committing to the final hardware form. Since the 1990s, new computer architectures have typically been "built", tested, and tweaked inside some other computer architecture in a computer architecture simulator, or inside an FPGA as a soft microprocessor, or both, before committing to the final hardware form.


THE DISCIPLINE OF COMPUTER ARCHITECTURE HAS THREE MAIN SUBCATEGORIES 

1. Instruction Set Architecture, or ISA. The ISA defines the machine code that a processor reads and acts upon, as well as the word size, memory address modes, processor registers, and data types.

 2. Microarchitecture, or computer organization, describes how a particular processor will implement the ISA.[14] The size of a computer's CPU cache, for instance, is an issue that generally has nothing to do with the ISA.

3. System Design includes all of the other hardware components within a computing system. These include:

     a. Data processing other than the CPU, such as direct memory access (DMA)

     b. Other issues, such as virtualization and multiprocessing.


INSTRUCTION SET ARCHITECTURE 

 An instruction set architecture (ISA) is the interface between a computer's software and hardware; it can also be viewed as the programmer's view of the machine. Computers do not understand high-level programming languages such as Java or C++. A processor only understands instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate those high-level languages into instructions that the processor can understand.
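
 To make the idea concrete, here is a small sketch using Python's bytecode as a stand-in for a real hardware ISA (the analogy, not the bytecode itself, is the point): a software tool translates one line of high-level code into simple, numbered instructions that a machine, in this case Python's virtual machine, executes one at a time.

    import dis

    def scale_and_add(x, y):
        # One line of high-level code...
        return 2 * x + y

    # ...is translated into simple low-level instructions. dis.dis prints
    # each instruction's name and operands, much as a disassembler would
    # for real machine code.
    dis.dis(scale_and_add)

 Running this prints instructions such as LOAD_FAST; a compiler for C or C++ performs the analogous translation into the instructions of a real ISA such as x86 or ARM.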


 ISAs vary in quality and completeness. A good ISA compromises between programmer convenience (how easy the code is to understand), size of the code (how much code is required for a specific action), cost of the computer to interpret the instructions (more complexity means more hardware is needed to decode and execute them), and speed of the computer (more complex decoding hardware means longer decode time). Memory organization defines how instructions interact with memory, and how different parts of the memory system interact with each other.


COMPUTER ORGANIZATION 

  Computer organization helps optimize performance-based products. For example, software engineers need to know the processing power of processors. They may need to optimize software in order to gain the most performance for the lowest price. This can require quite detailed analysis of the computer's organization. For example, in an SD card, the designers might need to arrange the card so that the most data can be processed in the fastest possible way.


 Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while virtual machines may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of running a virtual machine needs virtual memory hardware so that the memory of different virtual computers can be kept separate.


 IMPLEMENTATION 

 Once an instruction set and microarchitecture are designed, a practical machine must be developed. This design process is called the implementation. Implementation is usually not considered architectural design, but rather hardware design engineering. Implementation can be further broken down into several steps:

            • Logic Implementation designs the circuits required at a logic-gate level.


            • Circuit Implementation does transistor-level designs of basic elements (gates, multiplexers, latches, etc.) as well as of some larger blocks (ALUs, caches, etc.) that may be implemented at the logic-gate level, or even at the physical level if the design calls for it.


            • Physical Implementation draws physical circuits. The different circuit components are placed in a chip floorplan or on a board and the wires connecting them are created. 


PERFORMANCE 

 Modern computer performance is often described in IPC (instructions per cycle). This measures the efficiency of the architecture at any clock frequency; at the same clock rate, a higher IPC means a faster computer, so it is a useful measurement. Older computers had IPC counts as low as 0.1 instructions per cycle, while simple modern processors easily reach near 1.
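
 As a back-of-the-envelope sketch of why IPC matters (all numbers below are invented for illustration, not measurements of real processors), execution time can be estimated as the instruction count divided by the instructions completed per second, i.e. IPC times clock frequency:

    # Estimate execution time from instruction count, IPC, and clock rate.
    def execution_time_s(instructions, ipc, clock_hz):
        return instructions / (ipc * clock_hz)

    program = 1e9  # a program of one billion instructions (assumed)

    old_cpu = execution_time_s(program, ipc=0.1, clock_hz=3e9)
    new_cpu = execution_time_s(program, ipc=1.0, clock_hz=3e9)

    # At the same 3 GHz clock, the IPC = 1.0 machine is ten times faster.
    print(f"IPC 0.1: {old_cpu:.2f} s  |  IPC 1.0: {new_cpu:.2f} s")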


 Performance is affected by a very wide range of design choices. For example, pipelining a processor usually makes latency worse but throughput better. Computers that control machinery usually need low interrupt latencies, because they operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable, short time after the brake pedal is sensed, or else the brakes will fail.
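
 That latency/throughput trade-off can be seen in a minimal sketch, assuming a 5-stage pipeline and invented stage delays:

    # Compare an unpipelined design with a 5-stage pipeline.
    # Assumed numbers: 5 ns of total logic delay, plus 0.3 ns of latch
    # overhead per pipeline stage. Illustrative only.
    LOGIC_NS, LATCH_NS, STAGES = 5.0, 0.3, 5

    # Unpipelined: one instruction both starts and finishes every 5 ns.
    unpipelined_latency = LOGIC_NS
    unpipelined_throughput = 1 / LOGIC_NS          # instructions per ns

    # Pipelined: the clock period is one stage's share of the logic plus
    # latch overhead; latency is STAGES periods, but a new instruction
    # can finish every period.
    period_ns = LOGIC_NS / STAGES + LATCH_NS       # 1.3 ns
    pipelined_latency = STAGES * period_ns         # 6.5 ns (worse)
    pipelined_throughput = 1 / period_ns           # ~0.77 instr/ns (better)

    print(f"latency:    {unpipelined_latency} ns -> {pipelined_latency} ns")
    print(f"throughput: {unpipelined_throughput} -> {pipelined_throughput:.2f} per ns")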


POWER EFFICIENCY 

 Power efficiency is another important measurement in modern computers. A higher power efficiency can often be traded for lower speed or higher cost. The typical measurement when referring to power consumption in computer architecture is MIPS/W (millions of instructions per second per watt).
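
 A quick sketch of that figure of merit, again with invented numbers for two hypothetical chips:

    # MIPS/W: millions of instructions per second, divided by watts consumed.
    def mips_per_watt(ipc, clock_hz, watts):
        mips = ipc * clock_hz / 1e6
        return mips / watts

    fast_hot  = mips_per_watt(ipc=1.0, clock_hz=4e9, watts=100)  # 40 MIPS/W
    slow_cool = mips_per_watt(ipc=1.0, clock_hz=1e9, watts=10)   # 100 MIPS/W

    # The slower chip does less work per second but far more work per watt,
    # illustrating the speed-for-efficiency trade mentioned above.
    print(f"fast/hot: {fast_hot:.0f} MIPS/W  |  slow/cool: {slow_cool:.0f} MIPS/W")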

 

 Modern circuits require less and less power per transistor as the number of transistors per chip grows, because each added transistor needs its own power supply and its own wiring, so total chip power would otherwise become unmanageable. However, the number of transistors per chip is starting to increase at a slower rate. Therefore, power efficiency is becoming as important as, if not more important than, fitting more and more transistors into a single chip. Recent processor designs reflect this emphasis, putting more focus on power efficiency than on cramming as many transistors as possible into a single chip.
