Thursday, August 13, 2015

Bluetooth

Bluetooth is a wireless technology standard for exchanging data over short distances (using short-wavelength UHF radio waves in the ISM band from 2.4 to 2.485 GHz) between fixed and mobile devices, and for building personal area networks (PANs). Invented by telecom vendor Ericsson in 1994, it was originally conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of synchronization.
Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 25,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must make a device meet Bluetooth SIG standards to market it as a Bluetooth device. A network of patents applies to the technology; these are licensed to individual qualifying devices.

*Name and logo

The name "Bluetooth" is an Anglicised version of the Scandinavian Blåtand/Blåtann, (Old Norse blátǫnn) the epithet of the tenth-century king Harald Bluetooth who united dissonant Danish tribes into a single kingdom and, according to legend, introduced Christianity as well. The idea of this name was proposed in 1997 by Jim Kardach who developed a system that would allow mobile phones to communicate with computers. At the time of this proposal he was reading Frans G. Bengtsson's historical novel The Long Ships about Vikings and King Harald Bluetooth. The implication is that Bluetooth does the same with communications protocols, uniting them into one universal standard.[11][12][13]
The Bluetooth logo is a bind rune merging the Younger Futhark runes Hagall (ᚼ) and Bjarkan (ᛒ), Harald's initials.
Implementation

Bluetooth operates at frequencies between 2400 and 2483.5 MHz (including guard bands of 2 MHz at the bottom end and 3.5 MHz at the top). This is in the globally unlicensed (but not unregulated) Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1 MHz. Bluetooth 4.0 uses 2 MHz spacing, which accommodates 40 channels. The first channel starts at 2402 MHz and continues up to 2480 MHz in 1 MHz steps. It usually performs 1600 hops per second, with Adaptive Frequency-Hopping (AFH) enabled.
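As a rough illustration of the channel arithmetic described above (a sketch added here for clarity, not part of any Bluetooth stack), both the 79 classic BR/EDR channels and the 40 Bluetooth 4.0 Low Energy channels can be enumerated directly from the 2402 MHz base frequency:

```python
# Sketch: enumerate Bluetooth channel centre frequencies from the figures above.
# Classic BR/EDR: 79 channels, 1 MHz apart, starting at 2402 MHz.
# Bluetooth 4.0 Low Energy: 40 channels, 2 MHz apart, also starting at 2402 MHz.

def classic_channel_mhz(k: int) -> int:
    """Centre frequency of classic channel k (0..78)."""
    if not 0 <= k <= 78:
        raise ValueError("classic Bluetooth defines channels 0..78")
    return 2402 + k

def ble_channel_mhz(k: int) -> int:
    """Centre frequency of Bluetooth Low Energy channel k (0..39)."""
    if not 0 <= k <= 39:
        raise ValueError("Bluetooth Low Energy defines channels 0..39")
    return 2402 + 2 * k

if __name__ == "__main__":
    print(classic_channel_mhz(0), classic_channel_mhz(78))  # 2402 2480
    print(ble_channel_mhz(0), ble_channel_mhz(39))          # 2402 2480
```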
Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (Differential Quadrature Phase Shift Keying) and 8DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous data rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK and 8DPSK schemes, giving 2 and 3 Mbit/s respectively. The combination of these (BR and EDR) modes in Bluetooth radio technology is classified as a "BR/EDR radio".
Bluetooth is a packet-based protocol with a master-slave structure. One master may communicate with up to seven slaves in a piconet. All devices share the master's clock. Packet exchange is based on the basic clock, defined by the master, which ticks at 312.5 µs intervals. Two clock ticks make up a slot of 625 µs, and two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3 or 5 slots long, but in all cases the master's transmission begins in even slots and the slave's in odd slots.
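To make the slot timing concrete, the small sketch below (an illustration built from the figures just quoted, not code from any Bluetooth controller) maps an elapsed time to a slot number and reports which side of a single-slot exchange transmits in it:

```python
# Sketch of classic Bluetooth TDD slot arithmetic using the numbers above:
# the master clock ticks every 312.5 us, a slot is two ticks (625 us), and for
# single-slot packets the master transmits in even slots, the slave in odd slots.

SLOT_US = 625.0  # two 312.5 us clock ticks

def slot_number(elapsed_us: float) -> int:
    """Index of the slot containing the given elapsed time."""
    return int(elapsed_us // SLOT_US)

def single_slot_transmitter(slot: int) -> str:
    """Who transmits in this slot for single-slot packets."""
    return "master" if slot % 2 == 0 else "slave"

if __name__ == "__main__":
    for t in (0.0, 700.0, 1250.0, 1900.0):
        s = slot_number(t)
        print(f"t={t:7.1f} us -> slot {s} ({single_slot_transmitter(s)})")
```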
The description above applies to "classic" Bluetooth (BR/EDR). Bluetooth Low Energy, introduced in the 4.0 specification, uses the same spectrum but somewhat differently; see the Bluetooth Low Energy radio interface.
Communication and connection

A master Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad-hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as initiator of the connection—but may subsequently operate as slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is difficult. The specification is vague as to required behavior in scatternets.
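A minimal way to picture this round-robin polling (a hypothetical sketch, not how any particular Bluetooth controller is implemented) is a master cycling through its active slaves and addressing one per transmit slot:

```python
# Sketch: a piconet master polling its slaves round-robin,
# one slave addressed per master transmit (even) slot.
from itertools import cycle

slaves = ["slave-1", "slave-2", "slave-3"]   # hypothetical piconet members (up to 7 active)
polling_order = cycle(slaves)

for even_slot in range(0, 12, 2):            # master transmits in even slots
    addressed = next(polling_order)
    print(f"slot {even_slot}: master addresses {addressed}, "
          f"{addressed} may reply in slot {even_slot + 1}")
```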
Many USB Bluetooth adapters or "dongles" are available, some of which also include an IrDA adapter.
Uses

Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi-optical wireless path must be viable. Range is power-class-dependent, but effective ranges vary in practice; see the table below.
Officially, Class 3 radios have a range of up to 1 metre (3 ft); Class 2, most commonly found in mobile devices, 10 metres (33 ft); and Class 1, primarily for industrial use cases, 100 metres (300 ft). Bluetooth marketing qualifies that Class 1 range is in most cases 20–30 metres (66–98 ft), and Class 2 range 5–10 metres (16–33 ft).
Version      Data rate    Max. application throughput
1.2          1 Mbit/s     >80 kbit/s
2.0 + EDR    3 Mbit/s     >80 kbit/s
3.0 + HS     24 Mbit/s    See Version 3.0 + HS
4.0          24 Mbit/s    See Version 4.0 LE
The effective range varies due to propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than the specified line-of-sight ranges of the Bluetooth products. Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device, as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device. Mostly, however, Class 1 devices have a similar sensitivity to Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open-field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.
The Bluetooth Core Specification mandates a range of not less than 10 metres (33 ft), but there is no upper limit on actual range. Manufacturers' implementations can be tuned to provide the range needed for each case.
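As a back-of-the-envelope illustration of why range tracks transmit power and receiver sensitivity (the power and sensitivity figures below are assumed, typical-looking values rather than anything from the specification), a free-space path-loss estimate can be computed as follows; real indoor ranges are far shorter because of walls, fading and antenna losses:

```python
# Sketch: free-space range estimate for a 2.4 GHz link.
# FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55  (d in metres, f in MHz)
# The maximum range is where transmit power minus path loss equals receiver sensitivity.
import math

def free_space_range_m(tx_dbm: float, rx_sensitivity_dbm: float,
                       freq_mhz: float = 2441.0) -> float:
    """Distance at which received power just reaches the sensitivity limit."""
    max_path_loss_db = tx_dbm - rx_sensitivity_dbm
    return 10 ** ((max_path_loss_db + 27.55 - 20 * math.log10(freq_mhz)) / 20)

if __name__ == "__main__":
    # Assumed example figures: Class 2-like (+4 dBm) and Class 1-like (+20 dBm)
    # transmitters with a -80 dBm receiver; indoor conditions reduce these a lot.
    print(f"Class 2-like link: {free_space_range_m(4, -80):.0f} m free space")
    print(f"Class 1-like link: {free_space_range_m(20, -80):.0f} m free space")
```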

*Bluetooth profiles
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles, which are definitions of possible applications and specify general behaviours that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and control the communication from the start. Adherence to profiles saves time, because the parameters do not have to be transmitted anew before the bi-directional link becomes effective. There is a wide range of Bluetooth profiles describing many different types of applications or use cases for devices.

*List of applications
A typical Bluetooth mobile phone headset.
Wireless control of and communication between a mobile phone and a handsfree headset. This was one of the earliest applications to become popular.
Wireless control of and communication between a mobile phone and a Bluetooth compatible car stereo system.
Wireless control of and communication with tablets and speakers such as iOS and Android devices.
Wireless Bluetooth headset and Intercom. Idiomatically, a headset is sometimes called "a Bluetooth".
Wireless streaming of audio to headphones with or without communication capabilities.
Wireless networking between PCs in a confined space and where little bandwidth is required.
Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.
Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX.
Replacement of previous wired RS-232 serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices.
For controls where infrared was often used.
For low bandwidth applications where higher USB bandwidth is not required and cable-free connection desired.
Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices.
Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks.
Seventh- and eighth-generation game consoles such as Nintendo's Wii and Sony's PlayStation 3 use Bluetooth for their respective wireless controllers.
Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem.
Short range transmission of health sensor data from medical devices to mobile phone, set-top box or dedicated telehealth devices.
Allowing a DECT phone to ring and answer calls on behalf of a nearby mobile phone.
Real-time location systems (RTLS) are used to track and identify the location of objects in real time using "nodes" or "tags" attached to, or embedded in, the objects tracked, and "readers" that receive and process the wireless signals from these tags to determine their locations.
Personal security application on mobile phones for prevention of theft or loss of items. The protected item has a Bluetooth marker (e.g., a tag) that is in constant communication with the phone. If the connection is broken (the marker is out of range of the phone) then an alarm is raised. This can also be used as a man overboard alarm. A product using this technology has been available since 2009.
Calgary, Alberta, Canada's Roads Traffic division uses data collected from travelers' Bluetooth devices to predict travel times and road congestion for motorists.

Wednesday, August 12, 2015

History of the Tablet Computer



The tablet computer and its associated operating system began with the development of pen computing.[14] Electrical devices with data input and output on a flat information display existed as early as 1888 with the telautograph, which used a sheet of paper as display and a pen attached to electromechanical actuators. Throughout the 20th century, devices with these characteristics were imagined and created, whether as blueprints, prototypes, or commercial products. In addition to many academic and research systems, several companies released commercial products in the 1980s, with various input/output types tried out:

1. Fictional and prototype tablets
Tablet computers appeared in a number of works of science fiction in the second half of the 20th century, with the depiction of Arthur C. Clarke's NewsPad in Stanley Kubrick's 1968 film 2001: A Space Odyssey, the Calculator Pad in the 1951 novel Foundation by Isaac Asimov, the Opton in the 1961 novel Return from the Stars by Stanislaw Lem, The Hitchhiker's Guide to the Galaxy in Douglas Adams's 1978 comedy of the same name, and the numerous devices depicted in Gene Roddenberry's 1966 Star Trek series, all helping to promote and disseminate the concept to a wider audience. A device more powerful than today's tablets appeared briefly in Jerry Pournelle and Larry Niven's 1974 The Mote in God's Eye.

In 1968, computer scientist Alan Kay envisioned the KiddiComp while a PhD candidate; he developed and described the concept as the Dynabook in his 1972 proposal "A Personal Computer for Children of All Ages". The paper outlines the requirements for a conceptual portable educational device that would offer functionality similar to that supplied by a laptop computer or (in some of its other incarnations) a tablet or slate computer, with the exception that any Dynabook device would require near-eternal battery life. Adults could also use a Dynabook, but the target audience was children.

The sci-fi TV series Star Trek: The Next Generation featured tablet computers, which were designated PADDs.

In 1994, the European Union initiated the NewsPad project, inspired by Clarke and Kubrick's fictional work.[23] Acorn Computers developed and delivered an ARM-based touch screen tablet computer for this program, branding it the "NewsPad"; the project ended in 1997.
During the November 2000 COMDEX, Microsoft used the term Tablet PC to describe a prototype handheld device they were demonstrating.
In 2001, Ericsson Mobile Communications announced an experimental product named the DelphiPad, which was developed in cooperation with the Centre for Wireless Communications in Singapore, with a touch-sensitive screen, Netscape Navigator as its web browser and Linux as its operating system.
Following their earlier tablet-computer products such as the Pencept PenPad[30][31] and the CIC Handwriter, in September 1989 GRiD Systems released the first commercially available tablet-type portable computer, the GRiDPad. All three products were based on extended versions of the MS-DOS operating system.

In 1991, AT&T released their first EO Personal Communicator, this was one of the first commercially available tablets and ran the GO Corporation's PenPoint OS on AT&T's own hardware, including their own AT&T Hobbit CPU.

In 1992, Atari showed developers the Stylus, later renamed the ST-Pad, a prototype based on the TOS/GEM Atari ST platform that already included early handwriting recognition. Around the same time, Shiraz Shivji's company Momentus demonstrated a failed x86 MS-DOS-based pen computer with its own GUI.

Apple Computer launched the Apple Newton personal digital assistant in 1993. It utilised Apple's own new Newton OS, initially running on hardware manufactured by Motorola and incorporating an ARM CPU that Apple had specifically co-developed with Acorn Computers. The operating system and platform design were later licensed to Sharp and Digital Ocean, who went on to manufacture their own variants.

In 1996, Palm, Inc. released the first of its Palm OS-based PalmPilot touch- and stylus-based PDAs; the devices initially incorporated a Motorola Dragonball (68000) CPU.

Intel announced a StrongARM processor-based touchscreen tablet computer in 1999, under the name WebPAD. It was later re-branded as the "Intel Web Tablet".

In 2000, the Norwegian company Screen Media AS and the German company Dosch & Amand GmbH released the "FreePad". It was based on Linux and used the Opera browser. Internet access was provided by DECT DMAP, which was only available in Europe and provided up to 10 Mbit/s of wireless access. The device had 16 MB of storage, 32 MB of RAM and an x86-compatible 166 MHz "Geode" microcontroller by National Semiconductor. The screen was 10.4" or 12.1" and was touch-sensitive. It had slots for SIM cards to enable support of a television set-top box. FreePads were sold in Norway and the Middle East, but the company was dissolved in 2003.

In April 2000, Microsoft launched the Pocket PC 2000, utilizing their touch-capable Windows CE 3.0 operating system. The devices were made by several manufacturers, based on a mix of x86, MIPS, ARM, and SuperH hardware.

In 2002, Microsoft attempted to define the Microsoft Tablet PC[40] as a mobile computer for field work in business,[41] though their devices failed, mainly due to pricing and usability decisions that limited them to their original purpose, such as the existing devices being too heavy to be held with one hand for extended periods, and having legacy applications created for desktop interfaces that were not well adapted to the slate format.

Nokia had plans for an internet tablet since before 2000. An early model, the Nokia M510, was test-manufactured in 2001; it ran EPOC and featured an Opera browser, speakers and a 10-inch 800×600 screen, but it was not released because of fears that the market was not ready for it.[43] In 2005, Nokia finally released the first of its Internet Tablet range, the Nokia 770. These tablets ran a Debian-based Linux OS called Maemo. Nokia used the term internet tablet to refer to a portable information appliance that focused on Internet use and media consumption, in the range between a personal digital assistant (PDA) and an Ultra-Mobile PC (UMPC). Nokia also made two mobile phones, the N900, which ran Maemo, and the N9, which ran MeeGo.

Android was the first of today's dominating platforms for tablet computers to reach the market. In 2008, the first plans for Android-based tablets appeared. The first products were released in 2009. Among them was the Archos 5, a pocket-sized model with a 5-inch touchscreen, that was first released with a proprietary operating system and later (in 2009) released with Android 1.4. The Camangi WebStation was released in Q2 2009. The first LTE Android tablet appeared late 2009 and was made by ICD for Verizon. This unit was called the Ultra, but a version called Vega was released around the same time. Ultra had a 7-inch display while Vega's was 15 inches. Many more products followed in 2010. Several manufacturers waited for Android Honeycomb, specifically adapted for use with tablets, which debuted in February 2011.
2010 and afterwards


Apple is often credited with defining a new class of consumer device with the iPad, which shaped the commercial market for tablets in the following years and was the most successful tablet at the time of its release. iPads and competing devices have been tested by the US military. The iPad's debut in 2010 pushed tablets into the mainstream. Samsung's Galaxy Tab and others followed, continuing the trend towards the features described below.

In 2013, Samsung announced a tablet running the Android and Windows 8 operating systems concurrently; switching from one operating system to the other does not require restarting the device, and data can be synchronized between the two operating systems. The device, named ATIV Q, was scheduled for release in late 2013 but its release has been indefinitely delayed. Meanwhile, Asus released its Transformer Book Trio, a tablet that is also capable of running the Windows 8 and Android operating systems.
By 2014 around 23% of B2B companies were said to have deployed tablets for sales-related activities, according to a survey report by Corporate Visions.
Touch interface
Samsung Galaxy Tab demonstrating multi-touch
A key component of tablet computers is touch input, which allows the user to navigate easily and type with a virtual keyboard on the screen. The first tablet to do this was the GRiDPad by GRiD Systems Corporation; the tablet featured both a stylus (a pen-like tool to aid with precision on a touchscreen device) and an on-screen keyboard.
The system must respond to touches rather than clicks of a keyboard or mouse, which allows integrated hand-eye operation, a natural use of the somatosensory system. This is even more true of the more recent multi-touch interface, which often emulates the way objects behave.
Handwriting recognition
Chinese characters like 人 (meaning "person"; Mandarin: rén, Korean: in, Japanese: jin, nin, hito, Cantonese: jan4), written in two strokes, can be entered by handwriting recognition.

All versions of the Windows OS since Vista have natively supported advanced handwriting recognition, including via a digital stylus. Windows XP supported handwriting with optional downloads from Microsoft. The Windows handwriting recognition routines constantly analyze the user's handwriting to improve performance. Handwriting recognition is also supported in many applications such as Microsoft OneNote and Windows Journal. Some ARM-powered tablets, such as the Galaxy Note 10, also support a stylus and handwriting recognition. Wacom and N-trig digital pens provide approximately 2500 DPI resolution for handwriting, exceeding the resolution of capacitive touch screens by more than a factor of 10. These pens also support pressure sensitivity, allowing for "variable-width stroke-based" characters, such as Chinese/Japanese/Korean writing. Pressure is also used in digital art applications such as Autodesk Sketchbook.
4.Touchscreen hardware

Touchscreens usually come in one of two forms:

    Resistive touchscreens are passive and respond to pressure on the screen. They allow a high level of precision, useful in emulating a pointer (as is common in tablet computers) but may require calibration. Because of the high resolution, a stylus or fingernail is often used. Stylus-oriented systems are less suited to multi-touch.
    Capacitive touchscreens tend to be less accurate, but more responsive than resistive devices. Because they require a conductive material, such as a finger tip, for input, they are not common among stylus-oriented devices, but are prominent on consumer devices. Finger-driven capacitive screens do not currently support pressure input.

Some tablets can recognize individual palms, while some professional-grade tablets use pressure-sensitive films, such as those on graphics tablets. Some capacitive touch-screens can detect the size of the touched area and the pressure used.
Features

Today's tablets use capacitive touchscreens with multi-touch, unlike earlier stylus-driven resistive touchscreen devices. After 2007, with access to capacitive screens and the success of the iPhone, multi-touch and other natural user interface features, as well as flash-memory solid-state storage, "instant on" warm-booting, and external USB and Bluetooth keyboards, came to define tablets. Some have 3G mobile telephony applications.

Most tablets released since mid-2010 use a version of an ARM processor for longer battery life. The ARM Cortex family is powerful enough for tasks such as internet browsing, light production work and mobile games.

As with smartphones, most mobile tablet apps are supplied through online distribution, rather than boxed software or direct sales from software vendors. These sources, known as "app stores", provide centralized catalogs of software and allow "one click" on-device software purchasing, installation and updates. The app store is often shared with smartphones that use the same operating system.[64][65]

*Hardware

    High-definition, anti-glare display
    Wireless local area and internet connectivity (usually with Wi-Fi standard and optional mobile broadband)
    Front- and/or back- facing camera(s) for photographs and video
    Lower weight and longer battery life than a comparably-sized laptop
    Bluetooth for connecting peripherals and communicating with local devices
    Early devices had IR support and could work as a TV remote controller.
    Docking station: Keyboard and USB port(s)

Special hardware: The tablets can be equipped with special hardware to provide functionality, such as camera, GPS and local data storage.

*Software

    Mobile web browser
    E-book readers for digital books, periodicals and other content
    Downloadable apps such as games, education and utilities
    Portable media player function including video and music playback
    Email and social media
    Mobile phone functions (messaging, speakerphone, address book)
    Video-teleconferencing

*Data storage

    On-board flash memory
    Ports for removable storage
    Various cloud storage services for backup and syncing data across devices
    Local storage on a LAN

*Additional inputs
Besides a touchscreen and keyboard, some tablets can also use these input methods:

    Accelerometer: Detects the physical movement and orientation of the tablet. This allows the touchscreen display to shift to either portrait or landscape mode. In addition, tilting the tablet may be used as an input (for instance to steer in a driving game)
    Ambient light and proximity sensors, to detect if the device is close to something, in particular, to your ear, etc., which help to distinguish between intentional and unintentional touches.
    Speech recognition
    Gesture recognition
    Character recognition, allowing text written on the tablet to be stored like any other text, instead of using a keyboard.
    Near field communication with other compatible devices including ISO/IEC 14443 RFID tags.

Applied Computer Science

Applied computer science aims at identifying certain computer science concepts that can be used directly in solving real world problems.
Artificial intelligence


This branch of computer science aims to synthesise goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence (AI) research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the "Turing test" is still used to assess computer output on the scale of human intelligence. The automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Subfields include machine learning, computer vision, image processing, pattern recognition, data mining, evolutionary computation, knowledge representation, natural language processing, and robotics.

1.Computer architecture and engineering
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way in which the central processing unit performs internally and accesses addresses in memory. The field often involves disciplines of computer engineering and electrical engineering, selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.
Subfields include digital logic, microarchitecture, multiprocessing, ubiquitous computing, systems architecture, and operating systems.

2.Computer performance analysis
Computer performance analysis is the study of work flowing through computers with the general goals of improving throughput, controlling response time, using resources efficiently, eliminating bottlenecks, and predicting performance under anticipated peak loads.
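One common tool in this area is Little's Law, which ties average concurrency, throughput, and response time together; the tiny sketch below is an added illustration with made-up numbers, not something from the original text:

```python
# Sketch: Little's Law, L = lambda * W
#   L      = average number of requests in the system (concurrency)
#   lambda = throughput (requests per second)
#   W      = average response time (seconds)

def concurrency(throughput_rps: float, response_time_s: float) -> float:
    return throughput_rps * response_time_s

def required_throughput(concurrency_level: float, response_time_s: float) -> float:
    return concurrency_level / response_time_s

if __name__ == "__main__":
    # Hypothetical example: 200 requests/s at 50 ms average response time
    # keeps about 10 requests in flight on average.
    print(concurrency(200, 0.050))          # 10.0
    print(required_throughput(10, 0.050))   # 200.0
```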

3.Computer graphics and visualization
Computer graphics is the study of digital visual content, and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.

4.Computer security and cryptography
Computer security is a branch of computer technology whose objective includes protection of information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Cryptography is the practice and study of hiding (encrypting) and deciphering (decrypting) information. Modern cryptography is largely related to computer science, as the strength of many encryption and decryption algorithms rests on their computational complexity.
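As a toy illustration of encryption and decryption being inverse operations under a shared key (a deliberately insecure teaching sketch, not a real cipher or any specific algorithm mentioned above), a repeating-key XOR shows the idea that applying the same key recovers the plaintext:

```python
# Toy sketch only: repeating-key XOR to illustrate encrypt/decrypt symmetry.
# Do NOT use this for real security; practical cryptography relies on algorithms
# whose strength rests on computational complexity, as noted above.

def xor_with_key(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

if __name__ == "__main__":
    key = b"hypothetical-key"
    ciphertext = xor_with_key(b"attack at dawn", key)   # "encryption"
    recovered = xor_with_key(ciphertext, key)           # "decryption"
    print(ciphertext.hex())
    print(recovered.decode())                            # attack at dawn
```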

5.Computational science
Computational science (or scientific computing) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. In practical use, it is typically the application of computer simulation and other forms of computation to problems in various scientific disciplines.
Subfields include numerical analysis, computational physics, computational chemistry, and bioinformatics.
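For a minimal example of the kind of computation involved (a generic illustration, not tied to any particular scientific problem mentioned here), the sketch below integrates simple exponential decay with Euler's method and compares it with the exact solution:

```python
# Sketch: a tiny computational-science example.
# Numerically integrate dy/dt = -k*y with Euler's method and compare
# against the exact solution y(t) = y0 * exp(-k*t).
import math

def euler_decay(y0: float, k: float, dt: float, steps: int) -> float:
    y = y0
    for _ in range(steps):
        y += dt * (-k * y)
    return y

if __name__ == "__main__":
    y0, k, dt, steps = 1.0, 0.5, 0.01, 1000   # simulate 10 seconds
    approx = euler_decay(y0, k, dt, steps)
    exact = y0 * math.exp(-k * dt * steps)
    print(f"Euler: {approx:.5f}  exact: {exact:.5f}")
```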
6.Computer networks
This branch of computer science aims to manage networks between computers worldwide.
Concurrent, parallel and distributed systems

7.Databases
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages.
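As a small illustration of a database management system storing and searching data through a query language (a sketch using Python's built-in sqlite3 module; the table and rows are made up for the example):

```python
# Sketch: storing and querying data through a DBMS and SQL,
# using Python's built-in sqlite3 module with an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (name TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO devices (name, year) VALUES (?, ?)",
    [("GRiDPad", 1989), ("Apple Newton", 1993), ("iPad", 2010)],  # illustrative rows
)

# Query: retrieve devices released before 2000, ordered by year.
for name, year in conn.execute(
    "SELECT name, year FROM devices WHERE year < 2000 ORDER BY year"
):
    print(name, year)

conn.close()
```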

8.Software engineering
Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software; it doesn't just deal with the creation or manufacture of new software, but also with its internal maintenance and arrangement. Both computer applications software engineers and computer systems software engineers were projected to be among the fastest-growing occupations from 2008 to 2018.

History of the Computer

The history of the computer is rooted in the discipline of computer science that emerged in the 20th century, and was hinted at in the centuries prior. The progression, from mechanical inventions and mathematical theories towards modern computer concepts and machines, led to a major academic field and the basis of a massive worldwide industry.
The earliest known tool for use in computation was the abacus, developed in the period between 2700 and 2300 BCE in Sumer. The Sumerians' abacus consisted of a table of successive columns which delimited the successive orders of magnitude of their sexagesimal number system. Its original style of usage was by lines drawn in sand with pebbles. Abaci of a more modern design are still used as calculation tools today.
The Antikythera mechanism is believed to be the earliest known mechanical analog computer. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.
When John Napier discovered logarithms for computational purposes in the early 17th century, there followed a period of considerable progress by inventors and scientists in making calculating tools. In 1623 Wilhelm Schickard designed a calculating machine, but abandoned the project when the prototype he had started building was destroyed by a fire in 1624. Around 1640, Blaise Pascal, a leading French mathematician, constructed a mechanical adding device based on a design described by the Greek mathematician Hero of Alexandria. Then in 1672 Gottfried Wilhelm Leibniz invented the Stepped Reckoner, which he completed in 1694.

In 1837 Charles Babbage first described his Analytical Engine, which is accepted as the first design for a modern computer. The Analytical Engine had expandable memory, an arithmetic unit, and logic processing capabilities able to interpret a programming language with loops and conditional branching. Although never built, the design has been studied extensively and is understood to be Turing-equivalent. The Analytical Engine would have had a memory capacity of less than 1 kilobyte and a clock speed of less than 10 hertz.

Considerable advancement in mathematics and electronics theory was required before the first modern computers could be designed.


Intel

Intel Corporation (commonly referred to as Intel) is an American multinational technology company headquartered in Santa Clara, California. Intel is one of the world's largest and highest-valued semiconductor chip makers, based on revenue. It is the inventor of the x86 series of microprocessors, the processors found in most personal computers.

Intel Corporation was founded on July 18, 1968. Intel also makes motherboard chipsets, network interface controllers and integrated circuits, flash memory, graphics chips, embedded processors and other devices related to communications and computing. Founded by semiconductor pioneers Robert Noyce and Gordon Moore and widely associated with the executive leadership and vision of Andrew Grove, Intel combines advanced chip design capability with a leading-edge manufacturing capability. Though Intel was originally known primarily to engineers and technologists, its "Intel Inside" advertising campaign of the 1990s made it a household name, along with its Pentium processors.

Intel was an early developer of SRAM and DRAM memory chips, and this represented the majority of its business until 1981. Although Intel created the world's first commercial microprocessor chip in 1971, it was not until the success of the personal computer (PC) that this became its primary business. During the 1990s, Intel invested heavily in new microprocessor designs fostering the rapid growth of the computer industry. During this period Intel became the dominant supplier of microprocessors for PCs, and was known for aggressive and sometimes illegal tactics in defense of its market position, particularly against Advanced Micro Devices (AMD), as well as a struggle with Microsoft for control over the direction of the PC industry.

The 2013 rankings of the world's 100 most valuable brands, published by Millward Brown Optimor, showed the company's brand value at number 61.

Intel has also begun research into electrical transmission and generation. Intel has recently introduced a 3-D transistor that improves performance and energy efficiency. Intel has begun mass-producing this 3-D transistor, named the Tri-Gate transistor, with its 22 nm process, which is currently used in its 3rd-generation Core processors, initially released on April 29, 2011. In 2011, SpectraWatt Inc., a solar cell spinoff of Intel, filed for bankruptcy under Chapter 11. In June 2013, Intel unveiled its fourth generation of Intel Core processors (Haswell) at the Computex event in Taipei.
The Open Source Technology Center at Intel hosts PowerTOP and LatencyTOP, and supports other open-source projects such as Wayland, Intel Array Building Blocks, Threading Building Blocks (TBB), and Xen.

Intel is a portmanteau of the words integrated and electronics. The fact that "intel" is also a term for intelligence information made the name appropriate.



History of the Automobile

The early history of the automobile can be divided into a number of eras, based on the prevalent means of propulsion. Later periods were defined by trends in exterior styling, size, and utility preferences.

In 1768 the first steam-powered automobile capable of human transportation was built by Nicolas-Joseph Cugnot.

In 1807, François Isaac de Rivaz designed the first car powered by an internal combustion engine fueled by hydrogen.

In 1886 the first petrol- or gasoline-powered automobile, the Benz Patent-Motorwagen, was invented by Karl Benz. This is also considered to be the first "production" vehicle, as Benz made several identical copies.

At the turn of the 20th century, electrically powered automobiles appeared, but they occupied only a niche market until the turn of the 21st century.


Eras of invention
Early automobiles
Steam-powered wheeled vehicles
17th and 18th centuries

Ferdinand Verbiest, a member of a Jesuit mission in China, built the first steam-powered vehicle around 1672 as a toy for the Chinese Emperor. It was of small enough scale that it could not carry a driver, but it was quite possibly the first working steam-powered vehicle ("auto-mobile").
A replica of Richard Trevithick's 1801 road locomotive 'Puffing Devil'

Steam-powered self-propelled vehicles large enough to transport people and cargo were first devised in the late 18th century. Nicolas-Joseph Cugnot demonstrated his fardier à vapeur ("steam dray"), an experimental steam-driven artillery tractor, in 1770 and 1771. As Cugnot's design proved to be impractical, his invention was not developed in his native France. The center of innovation shifted to Great Britain. By 1784, William Murdoch had built a working model of a steam carriage in Redruth. The first automobile patent in the United States was granted to Oliver Evans in 1789, and in 1801 Richard Trevithick was running a full-sized vehicle on the roads in Camborne.
19th century

Such vehicles were in vogue for a time, and over the next decades innovations such as hand brakes, multi-speed transmissions, and better steering developed. Some were commercially successful in providing mass transit, until a backlash against these large, speedy vehicles resulted in the passage of the Locomotive Act (1865), which required many self-propelled vehicles on public roads in the United Kingdom to be preceded by a man on foot waving a red flag and blowing a horn. This effectively killed road auto development in the UK for most of the rest of the 19th century; inventors and engineers shifted their efforts to improvements in railway locomotives. (The law was not repealed until 1896, although the need for the red flag was removed in 1878.)

Among other efforts, in 1815 a professor at Prague Polytechnic, Josef Bozek, built an oil-fired steam car. Walter Hancock, builder and operator of London steam buses, built a four-seat steam phaeton in 1838.

In 1867, Canadian jeweller Henry Seth Taylor demonstrated his four-wheeled "steam buggy" at the Stanstead Fair in Stanstead, Quebec, and again the following year. The basis of the buggy, which he began building in 1865, was a high-wheeled carriage with bracing to support a two-cylinder steam engine mounted on the floor.

What some people define as the first "real" automobile was produced by Frenchman Amédée Bollée in 1873, who built self-propelled steam road vehicles to transport groups of passengers.

The American George B. Selden filed for a patent on May 8, 1879. His application included not only the engine but its use in a four-wheeled car. Selden filed a series of amendments to his application which stretched out the legal process, resulting in a delay of 16 years before US patent 549,160 was granted on November 5, 1895.

Karl Benz, the inventor of numerous car-related technologies, received a German patent in 1886.

The four-stroke petrol (gasoline) internal combustion engine that constitutes the most prevalent form of modern automotive propulsion is a creation of Nikolaus Otto. The similar four-stroke diesel engine was invented by Rudolf Diesel. The hydrogen fuel cell, one of the technologies hailed as a replacement for gasoline as an energy source for cars, was discovered in principle by Christian Friedrich Schönbein in 1838. The battery electric car owes its beginnings to Ányos Jedlik, one of the inventors of the electric motor, and Gaston Planté, who invented the lead-acid battery in 1859.

The first carriage-sized automobile suitable for use on existing wagon roads in the United States was a steam-powered vehicle invented in 1871 by Dr. J.W. Carhart, a minister of the Methodist Episcopal Church, in Racine, Wisconsin. It induced the State of Wisconsin in 1875 to offer a $10,000 award to the first to produce a practical substitute for the use of horses and other animals. They stipulated that the vehicle would have to maintain an average speed of more than five miles per hour over a 200-mile course. The offer led to the first city-to-city automobile race in the United States, starting on July 16, 1878, in Green Bay, Wisconsin, and ending in Madison, via Appleton, Oshkosh, Waupun, Watertown, Fort Atkinson, and Janesville. While seven vehicles were registered, only two started to compete: the entries from Green Bay and Oshkosh. The vehicle from Green Bay was faster, but broke down before completing the race. The Oshkosh entry finished the 201-mile course in 33 hours and 27 minutes, posting an average speed of six miles per hour. In 1879, the legislature awarded half the prize.
20th and 21st centuries


Steam-powered road vehicles, both cars and wagons, reached the peak of their development in the early 1930s with fast-steaming lightweight boilers and efficient engine designs. Internal combustion engines also developed greatly during WWI, becoming simpler to operate and more reliable. The development of the high-speed diesel engine from 1930 began to replace them for wagons, accelerated by tax changes in the UK that made steam wagons uneconomic overnight. Although a few designers continued to advocate steam power, no significant developments in production steam cars took place after Doble in the early 1930s.

Whether steam cars will ever be reborn in later technological eras remains to be seen. Magazines such as Light Steam Power continued to describe them into the 1980s. The 1950s saw interest in steam-turbine cars powered by small nuclear reactors (this was also true of aircraft), but the dangers inherent in nuclear fission technology soon killed these ideas.
Electric automobiles

In 1828, Ányos Jedlik, a Hungarian who invented an early type of electric motor, created a tiny model car powered by his new motor. In 1834, Vermont blacksmith Thomas Davenport, the inventor of the first American DC electric motor, installed his motor in a small model car, which he operated on a short circular electrified track. In 1835, Professor Sibrandus Stratingh of Groningen, the Netherlands, and his assistant Christopher Becker created a small-scale electric car, powered by non-rechargeable primary cells. In 1838, Scotsman Robert Davidson built an electric locomotive that attained a speed of 4 miles per hour (6 km/h). In England, a patent was granted in 1840 for the use of rail tracks as conductors of electric current, and similar American patents were issued to Lilley and Colten in 1847. Between 1832 and 1839 (the exact year is uncertain), Robert Anderson of Scotland invented the first crude electric carriage, powered by non-rechargeable primary cells.

The Flocken Elektrowagen of 1888 by German inventor Andreas Flocken is regarded as the first real electric car of the world.

Electric cars enjoyed popularity between the late 19th century and early 20th century, when electricity was among the preferred methods for automobile propulsion, providing a level of comfort and ease of operation that could not be achieved by the gasoline cars of the time. Advances in internal combustion technology, especially the electric starter, soon rendered this advantage moot; the greater range of gasoline cars, quicker refueling times, and growing petroleum infrastructure, along with the mass production of gasoline vehicles by companies such as the Ford Motor Company, which reduced prices of gasoline cars to less than half that of equivalent electric cars, led to a decline in the use of electric propulsion, effectively removing it from important markets such as the United States by the 1930s. However, in recent years, after a failed reappearance in the late 1990s, increased concerns over the environmental impact of gasoline cars, higher gasoline prices, improvements in battery technology, and the prospect of peak oil have brought about renewed interest in electric cars, which are perceived to be more environmentally friendly and cheaper to maintain and run despite high initial costs.

Advanced Micro Devices

Advanced Micro Devices, Inc. (AMD) is an American worldwide semiconductor company based in Sunnyvale, California, United States, that develops computer processors and related technologies for business and consumer markets. While initially it manufactured its own processors, the company became fabless after GlobalFoundries was spun off in 2009. AMD's main products include microprocessors, motherboard chipsets, embedded processors and graphics processors for servers, workstations and personal computers, and embedded systems applications.

AMD is the second-largest global supplier of microprocessors based on the x86 architecture and also one of the largest suppliers of graphics processing units.

AMD is the only significant rival to Intel in the central processor (CPU) market for (x86 based) personal computers. Since acquiring ATI in 2006, AMD and its competitor Nvidia have dominated the discrete graphics processor unit (GPU) market.

Company history
AMD campus in Markham, Ontario, Canada, formerly ATI headquarters
AMD's LEED-certified Lone Star campus in Austin, Texas
First twelve years

Advanced Micro Devices was formally incorporated on May 1, 1969, by Jerry Sanders, along with seven of his colleagues from Fairchild Semiconductor. Sanders, an electrical engineer who was the director of marketing at Fairchild, had like many Fairchild executives grown frustrated with the increasing lack of support, opportunity, and flexibility within that company, and decided to leave to start his own semiconductor company. The previous year Robert Noyce, who had invented the first practical integrated circuit or microchip in 1959 at Fairchild,[10] had left Fairchild together with Gordon Moore and founded the semiconductor company Intel in July 1968.

In September 1969, AMD moved from its temporary location in Santa Clara to Sunnyvale, California. To immediately secure a customer base, AMD initially became a second-source supplier of microchips designed by Fairchild and National Semiconductor. AMD first focused on producing logic chips. The company guaranteed quality control to United States Military Standard, an advantage in the early computer industry since unreliability in microchips was a distinct problem that customers – including computer manufacturers, the telecommunications industry, and instrument manufacturers – wanted to avoid.

In November 1969, the company manufactured its first product, the Am9300, a 4-bit MSI shift register, which began selling in 1970. Also in 1970, AMD produced its first proprietary product, the Am2501 logic counter, which was highly successful. Its best-selling product in 1971 was the Am2505, the fastest multiplier available.

In 1971, AMD entered the RAM chip market, beginning with the Am3101, a 64-bit bipolar RAM. That year AMD also greatly increased the sales volume of its linear integrated circuits, and by year end the company's total annual sales reached $4.6 million.

AMD went public in September 1972. The company was a second source for Intel MOS/LSI circuits by 1973, with products such as the Am14/1506 and Am14/1507, dual 100-bit dynamic shift registers. By 1975, AMD was producing 212 products – of which 49 were proprietary, including the Am9102 (a static N-channel 1024-bit RAM) and three low-power Schottky MSI circuits: Am25LS07, Am25LS08, and Am25LS09.

Intel had created the first microprocessor, its 4-bit 4004, in 1971. By 1975, AMD entered the microprocessor market with the Am9080, a reverse-engineered clone of the Intel 8080, and the Am2900 bit-slice microprocessor family. When Intel began installing microcode in its microprocessors in 1976, it entered into a cross-licensing agreement with AMD, granting AMD a copyright license to the microcode in its microprocessors and peripherals, effective October 1976.

In 1977, AMD entered into a joint venture with Siemens, a German engineering conglomerate wishing to enhance its technology expertise and enter the U.S. market. Siemens purchased 20% of AMD's stock, giving AMD an infusion of cash to increase its product lines. That year the two companies also jointly established Advanced Micro Computers, located in Silicon Valley and in Germany, giving AMD an opportunity to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens' stake in the U.S. division in 1979. AMD closed its Advanced Micro Computers subsidiary in late 1981, after switching focus to manufacturing second-source Intel x86 microprocessors.

Total sales in fiscal year 1978 topped $100 million, and in 1979 AMD debuted on the New York Stock Exchange. In 1979, production also began in AMD's new semiconductor fab in Austin; the company already had overseas assembly facilities in Penang and Manila, and it began construction on a semiconductor fab in San Antonio in 1981. In 1980, AMD began supplying semiconductor products for telecommunications, an industry undergoing rapid expansion and innovation.
Technology exchange agreement with Intel

Intel had introduced the first x86 microprocessors in 1978. In 1981, IBM created its PC and wanted Intel's x86 processors, but only under the condition that Intel also provide a second-source manufacturer for its patented x86 microprocessors. Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981 and formally executed in February 1982. The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other; that is, each party could "earn" the right to manufacture and sell a product developed by the other, if agreed to, by exchanging the manufacturing rights to a product of equivalent technical complexity. The technical information and licenses needed to make and sell a part would be exchanged for a royalty to the developing company. The 1982 agreement also extended the 1976 AMD–Intel cross-licensing agreement through 1995. The agreement included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice. The main result of the 1982 agreement was that AMD became a second-source manufacturer of Intel's x86 microprocessors and related chips, and Intel provided AMD with database tapes for its 8086, 80186, and 80286 chips.

Beginning in 1982, AMD began volume-producing second-source Intel-licensed 8086, 8088, 80186, and 80188 processors, and by 1984 its own Am286 clone of Intel's 80286 processor, for the rapidly growing market of IBM PCs and IBM clones. It also continued its successful concentration on proprietary bipolar chips. In 1983, it introduced INT.STD.1000, the highest manufacturing quality standard in the industry.

The company continued to spend greatly on research and development, and in addition to other breakthrough products created the world's first 512K EPROM in 1984. That year AMD was listed in the book The 100 Best Companies to Work for in America, and based on 1984 income it made the Fortune 500 list for the first time in 1985.[62][63]

By mid-1985, however, the microchip market experienced a severe downturn, mainly due to long-term aggressive trade practices (dumping) from Japan, but also due to a crowded and non-innovative chip market in the U.S. AMD rode out the mid-1980s crisis by aggressively innovating and modernizing, devising the Liberty Chip program of designing and manufacturing one new chip or chip set per week for 52 weeks in fiscal year 1986, and by heavily lobbying the U.S. government until sanctions and restrictions were put into place to prevent predatory Japanese pricing.[67] During this time period, AMD withdrew from the DRAM market,[68] and at the same time made some headway into the CMOS market, which it had lagged in entering, having focused instead on bipolar chips.[69]

AMD had some success in the mid-1980s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multi-standard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex.[70] Beginning in 1986, AMD embraced the perceived shift toward RISC with its own AMD Am29000 (29k) processor; the 29k survived as an embedded processor. The company also increased its EPROM memory market share in the late 1980s.[74] Throughout the 1980s, AMD was a second-source supplier of Intel x86 processors. In 1991, it introduced its own 386-compatible Am386, an AMD-designed chip. Creating its own chips, AMD began to compete directly with Intel.

AMD had a large and successful flash memory business, even during the dotcom bust. In 2003, to divest some manufacturing and aid its overall cash flow, which was under duress from aggressive microprocessor competition from Intel, AMD spun off its flash memory business and manufacturing into Spansion, a joint venture with Fujitsu, which had been co-manufacturing flash memory with AMD since 1993. AMD divested itself of Spansion in December 2005, in order to focus on the microprocessor market, and Spansion went public in an IPO.

AMD announced the acquisition of the graphics processor company ATI Technologies on July 24, 2006. AMD paid $4.3 billion in cash and 58 million shares of its stock, for a total of approximately $5.4 billion. The transaction was completed on October 25, 2006. On August 30, 2010, AMD announced that it would retire the ATI brand name for its graphics chipsets in favor of the AMD brand name.

In October 2008, AMD announced plans to spin off manufacturing operations in the form of a multibillion-dollar joint venture with Advanced Technology Investment Co., an investment company formed by the government of Abu Dhabi. The new venture is called GlobalFoundries Inc. The partnership and spin-off gave AMD an infusion of cash and allowed AMD to focus solely on chip design. To assure the Abu Dhabi investors of the new venture's success, CEO Hector Ruiz stepped down as CEO of AMD in July 2008, while remaining executive chairman, in preparation for becoming chairman of GlobalFoundries in March 2009.[84][85] President and COO Dirk Meyer became AMD's CEO. Recessionary losses necessitated AMD cutting 1,100 jobs in 2009.

In August 2011, AMD announced that former Lenovo executive Rory Read would be joining the company as CEO, replacing Meyer.[88] AMD announced in November 2011 plans to lay off more than 10% (1,400) of its employees from across all divisions worldwide. In October 2012, it announced plans to lay off an additional 15% of its workforce to reduce costs in the face of declining sales revenue.
AMD acquired the low-power server manufacturer SeaMicro in early 2012, with an eye to bringing out an ARM architecture server chip.

On October 8, 2014, AMD announced that Rory Read had stepped down after three years as president and chief executive officer. He was succeeded by Lisa Su, a key lieutenant who had been serving as chief operating officer since June.[92]

On October 16, 2014, AMD announced a new restructuring plan along with its Q3 results. Effective July 1, 2014, AMD reorganized into two business groups: Computing and Graphics, which primarily includes desktop and notebook processors and chipsets, discrete GPUs, and professional graphics; and Enterprise, Embedded and Semi-Custom, which primarily includes server and embedded processors, dense servers, semi-custom SoC products, engineering services, and royalties. As part of this restructuring AMD announced that 7% of its global workforce would be laid off by the end of 2014.

History of Computer Architecture

The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. Two other early and important examples were:

    John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements; and
    Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also from 1945, which cited von Neumann's paper.

The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson, Mohammad Usman Khan and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in IBM’s main research center. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos Scientific Laboratory. To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements were at the level of “system architecture” – a term that seemed more useful than “machine organization.”

Subsequently, Brooks, a Stretch designer, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing,

    Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.

Brooks went on to help develop the IBM System/360 (now called the IBM zSeries) line of computers, in which “architecture” became a noun defining “what the user needs to know”. Later, computer users came to use the term in many less-explicit ways.

The earliest computer architectures were designed on paper and then directly built into the final hardware form. Later, computer architecture prototypes were physically built in the form of a Transistor–Transistor Logic (TTL) computer—such as the prototypes of the 6800 and the PA-RISC—tested, and tweaked, before committing to the final hardware form. As of the 1990s, new computer architectures are typically "built", tested, and tweaked—inside some other computer architecture in a computer architecture simulator; or inside a FPGA as a soft microprocessor; or both—before committing to the final hardware form.
Subcategories

The discipline of computer architecture has three main subcategories:

    Instruction Set Architecture, or ISA. The ISA defines the machine code that a processor reads and acts upon as well as the word size, memory address modes, processor registers, and data formats.
    Microarchitecture, or computer organization, describes how a particular processor will implement the ISA. The size of a computer's CPU cache, for instance, is an organizational issue that generally has nothing to do with the ISA (a small sketch of this split follows the list).
    System Design includes all of the other hardware components within a computing system. These include:
        Data processing other than the CPU, such as direct memory access (DMA)
        Other issues such as virtualization, multiprocessing and software features.
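As a rough illustration of the ISA/organization split referenced above, here is a minimal Python sketch. Everything in it is invented for illustration (the toy ADD operation, the class names, and the cycle counts are assumptions, not any real processor): two "implementations" produce the identical architectural result for the same instruction, while an organizational detail such as latency differs.

    # Minimal sketch: one toy ISA behaviour, two hypothetical "microarchitectures".
    # What ADD computes is fixed by the ISA; the cycle cost is an organizational
    # detail that is invisible at the ISA level.

    class SimpleAdder:
        CYCLES_PER_ADD = 1          # hypothetical single-cycle design

        def add(self, a, b):
            return (a + b) & 0xFFFFFFFF   # 32-bit wrap-around result

    class PipelinedAdder:
        CYCLES_PER_ADD = 3          # hypothetical multi-cycle design

        def add(self, a, b):
            return (a + b) & 0xFFFFFFFF   # identical architectural result

    for impl in (SimpleAdder(), PipelinedAdder()):
        print(type(impl).__name__, impl.add(7, 5), "in", impl.CYCLES_PER_ADD, "cycles")

Both classes print the same sum; only the invented cycle count differs, which is the sense in which cache size or pipeline depth is "organization" rather than architecture.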

Some architects at companies such as Intel and AMD use finer distinctions:

    Macroarchitecture: architectural layers more abstract than microarchitecture
    Instruction Set Architecture (ISA): as defined above
        Assembly ISA: a smart assembler may convert an abstract assembly language common to a group of machines into slightly different machine language for different implementations
    Programmer Visible Macroarchitecture: higher-level language tools such as compilers may define a consistent interface or contract for the programmers using them, abstracting differences between underlying ISAs, UISAs, and microarchitectures. For example, the C, C++, and Java standards define different programmer-visible macroarchitectures.
    UISA (Microcode Instruction Set Architecture)—a family of machines with different hardware level microarchitectures may share a common microcode architecture, and hence a UISA.
    Pin Architecture: The hardware functions that a microprocessor should provide to a hardware platform, e.g., the x86 pins A20M, FERR/IGNNE or FLUSH. Also, messages that the processor should emit so that external caches can be invalidated (emptied). Pin architecture functions are more flexible than ISA functions because external hardware can adapt to new encodings, or change from a pin to a message. The term "architecture" fits, because the functions must be provided for compatible systems, even if the detailed method changes.

The Roles
Definition

The purpose is to design a computer that maximizes performance while keeping power consumption in check, keeps costs low relative to the expected performance, and is also very reliable. Many aspects must be considered, including instruction set design, functional organization, logic design, and implementation. Implementation involves integrated circuit design, packaging, power, and cooling. Optimizing the design requires familiarity with topics ranging from compilers and operating systems to logic design and packaging.
Instruction set architecture
An instruction set architecture (ISA) is the interface between the computer's software and hardware, and can also be viewed as the programmer's view of the machine. Computers do not understand high-level languages, which have few, if any, language elements that translate directly into a machine's native opcodes. A processor only understands instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate high-level languages, such as C, into instructions.

Besides instructions, the ISA defines items in the computer that are available to a program—e.g. data types, registers, addressing modes, and memory. Instructions locate operands with Register indexes (or names) and memory addressing modes.

The ISA of a computer is usually described in a small book or pamphlet, which describes how the instructions are encoded. Also, it may define short, vaguely mnemonic names for the instructions. The names can be recognized by a software development tool called an assembler. An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form. Disassemblers are also widely available, usually in debuggers, which are software programs used to isolate and correct malfunctions in binary computer programs.
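To make the assembler/disassembler idea concrete, here is a minimal Python sketch for a made-up machine. The mnemonics, opcodes, and one-byte encoding below are assumptions invented for illustration, not any real ISA:

    # Toy assembler/disassembler for a hypothetical 8-bit encoding:
    # high 4 bits = opcode, low 4 bits = operand.
    OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "HALT": 0xF}
    MNEMONICS = {v: k for k, v in OPCODES.items()}

    def assemble(lines):
        """Translate human-readable mnemonics into machine bytes."""
        program = []
        for line in lines:
            parts = line.split()
            op = OPCODES[parts[0]]
            operand = int(parts[1]) if len(parts) > 1 else 0
            program.append((op << 4) | (operand & 0xF))
        return bytes(program)

    def disassemble(program):
        """Recover mnemonics from machine bytes (the debugger's view)."""
        return ["%s %d" % (MNEMONICS[b >> 4], b & 0xF) for b in program]

    code = assemble(["LOAD 3", "ADD 4", "STORE 5", "HALT"])
    print(code.hex())           # prints 132435f0
    print(disassemble(code))    # recovers ['LOAD 3', 'ADD 4', 'STORE 5', 'HALT']

The assembler maps names to numbers and the disassembler inverts that mapping, which is essentially all the pamphlet-style encoding description promises.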

ISAs vary in quality and completeness. A good ISA compromises between programmer convenience (more operations can be better), cost of the computer to interpret the instructions (cheaper is better), speed of the computer (faster is better), and size of the code (smaller is better). For example, a single-instruction ISA is possible, inexpensive, and fast (e.g., subtract and jump if zero, as actually used in the SSEM), but it was neither convenient for programmers nor helpful in keeping programs small. Memory organization defines how instructions interact with the memory, and also how different parts of memory interact with each other.

During design, emulation software can run programs written in a proposed instruction set. Modern emulators running such tests may measure time, energy consumption, and compiled code size to determine whether a particular instruction set architecture is meeting its goals.
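In that spirit, the sketch below reuses the toy encoding from the assembler example above and adds a tiny emulator that runs a program and reports two of the metrics mentioned: static code size and dynamic instruction count. All of it is a hypothetical illustration under the same invented encoding, not a real design flow:

    # Minimal emulator for the toy encoding sketched earlier: one accumulator,
    # 16 memory cells, and the opcodes LOAD / ADD / STORE / HALT.
    def emulate(program, memory):
        acc, pc, executed = 0, 0, 0
        while pc < len(program):
            op, operand = program[pc] >> 4, program[pc] & 0xF
            executed += 1
            pc += 1
            if op == 0x1:              # LOAD addr -> accumulator
                acc = memory[operand]
            elif op == 0x2:            # ADD addr to accumulator
                acc = (acc + memory[operand]) & 0xFF
            elif op == 0x3:            # STORE accumulator -> addr
                memory[operand] = acc
            elif op == 0xF:            # HALT
                break
        return executed

    mem = [0] * 16
    mem[3], mem[4] = 20, 22
    program = bytes([0x13, 0x24, 0x35, 0xF0])   # LOAD 3; ADD 4; STORE 5; HALT
    count = emulate(program, mem)
    print("code size:", len(program), "bytes;",
          "instructions executed:", count, "; mem[5] =", mem[5])

A real evaluation would run large benchmark programs and also model timing and energy, but the principle is the same: execute the proposed ISA in software and measure.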
Computer organization
Main article: Microarchitecture

Computer organization helps optimize performance-based products. For example, software engineers need to know the processing ability of processors. They may need to optimize software in order to gain the most performance at the least expense. This can require quite detailed analysis of the computer organization. For example, in a multimedia decoder, the designers might need to arrange for most data to be processed in the fastest data path.

Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while supervisory software may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of virtualization needs virtual memory hardware so that the memory of different simulated computers can be kept separated. Computer organization and features also affect power consumption and processor cost.
Implementation

Once an instruction set and micro-architecture are described, a practical machine must be designed. This design process is called the implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering. Implementation can be further broken down into several (not fully distinct) steps:

    Logic Implementation designs the blocks defined in the micro-architecture at (primarily) the register-transfer level and logic-gate level (a tiny gate-level sketch follows this list).
    Circuit Implementation does transistor-level designs of basic elements (gates, multiplexers, latches etc.) as well as of some larger blocks (ALUs, caches etc.) that may be implemented at this level, or even (partly) at the physical level, for performance reasons.
    Physical Implementation draws physical circuits. The different circuit components are placed in a chip floorplan or on a board and the wires connecting them are routed.
    Design Validation tests the computer as a whole to see if it works in all situations and all timings. Once implementation starts, the first design validations are simulations using logic emulators. However, this is usually too slow to run realistic programs. So, after making corrections, prototypes are constructed using Field-Programmable Gate-Arrays (FPGAs). Many hobby projects stop at this stage. The final step is to test prototype integrated circuits. Integrated circuits may require several redesigns to fix problems.
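As a small taste of the logic-implementation level referenced above, here is a Python sketch of a 1-bit full adder built only from AND/OR/XOR "gates", with an exhaustive check standing in for design validation. Real logic implementation is done in a hardware description language, so treat this purely as an illustration:

    # A 1-bit full adder expressed as gate-level Boolean functions.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def XOR(a, b): return a ^ b

    def full_adder(a, b, carry_in):
        """Return (sum, carry_out) computed only from basic gates."""
        s1 = XOR(a, b)
        total = XOR(s1, carry_in)
        carry_out = OR(AND(a, b), AND(s1, carry_in))
        return total, carry_out

    # Exhaustive check against ordinary integer addition (a toy "validation").
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                s, cout = full_adder(a, b, c)
                assert s + 2 * cout == a + b + c
    print("full adder verified for all 8 input combinations")

Chaining such adders bit by bit gives a ripple-carry adder, which is the kind of block a register-transfer-level design would then wire between registers.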

For CPUs, the entire implementation process is often called CPU design.

History of Artificial Intelligence

Main articles: History of artificial intelligence and Timeline of artificial intelligence

Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshiped in Egypt and Greece, and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari. It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus. By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots). Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods". Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.

Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction.[18][19] This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.
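A minimal Python sketch of that symbol-shuffling idea follows. The particular machine and its transition table are invented for illustration (it merely flips every bit of its input), but the mechanism of reading a symbol, writing a symbol, moving a head, and changing state is the same one Turing showed can express any effective computation:

    # A tiny Turing-style machine: one tape of "0"/"1" symbols, one head,
    # and a transition table. This table just inverts the input string.
    def run(tape, rules, state="invert", blank="_"):
        tape, head = list(tape), 0
        while state != "halt":
            symbol = tape[head] if head < len(tape) else blank
            new_symbol, move, state = rules[(state, symbol)]
            if head < len(tape):
                tape[head] = new_symbol        # write the new symbol
            head += 1 if move == "R" else -1   # move the head
        return "".join(tape)

    rules = {
        ("invert", "0"): ("1", "R", "invert"),
        ("invert", "1"): ("0", "R", "invert"),
        ("invert", "_"): ("_", "R", "halt"),   # past the input: stop
    }
    print(run("100110", rules))   # prints 011001
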
The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[21] The attendees, including John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel, and Herbert Simon, became the leaders of AI research for many decades. They and their students wrote programs that were, to most people, simply astonishing: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world.[26] AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".
They had failed to recognize the difficulty of some of the problems they faced.[28] In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off all undirected exploratory research in AI. The next few years would later be called an "AI winter", a period when funding for AI projects was hard to find.

In the early 1980s, AI research was revived by the commercial success of expert systems,[31] a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S and British governments to restore funding for academic research in the field.[32] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer lasting AI winter began.
In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards. On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research, as do intelligent personal assistants in smartphones.

Research
Goals

    You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence.
    —BYTE, April 1985

The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.[6]
Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
For difficult problems, most of these algorithms can require enormous computational resources – most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.
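A small Python sketch of that combinatorial explosion, using a brute-force subset-sum search as a stand-in problem (the numbers and the unreachable target are arbitrary choices for illustration):

    # Brute-force subset-sum: try every subset of the input numbers.
    # The number of candidate subsets doubles with every extra element,
    # which is exactly the combinatorial explosion described above.
    from itertools import combinations

    def subset_sum_states(numbers, target):
        examined = 0
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                examined += 1
                if sum(subset) == target:
                    return subset, examined
        return None, examined

    for n in (10, 15, 20):
        numbers = list(range(1, n + 1))
        _, examined = subset_sum_states(numbers, target=-1)   # never reachable
        print(n, "items ->", examined, "subsets examined")     # 2**n in the worst case

Going from 10 to 20 items multiplies the work by roughly a thousand, which is why smarter search strategies and heuristics matter far more than raw speed.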
Human beings solve most of their problems using fast, intuitive judgements rather than the conscious, step-by-step deduction that early AI research was able to model.[43] AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the probabilistic nature of the human ability to guess.
Knowledge representation
An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.
Main articles: Knowledge representation and Commonsense knowledge

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects;[48] knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of "what exists" is an ontology: the set of objects, relations, concepts and so on that the machine knows about. The most general are called upper ontologies, which attempt to provide a foundation for all other knowledge.
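A toy Python sketch of such an ontology: a handful of invented concepts, "is-a" links between them, and a query that walks the hierarchy. It is purely illustrative and far simpler than real systems such as Cyc:

    # A miniature ontology: concepts linked by "is-a" relations,
    # plus a query that climbs the hierarchy.
    IS_A = {
        "canary": "bird",
        "penguin": "bird",
        "bird": "animal",
        "animal": "physical_object",
    }

    def ancestors(concept):
        """Return every more general concept the given one falls under."""
        chain = []
        while concept in IS_A:
            concept = IS_A[concept]
            chain.append(concept)
        return chain

    def is_kind_of(concept, category):
        return category == concept or category in ancestors(concept)

    print(ancestors("canary"))              # ['bird', 'animal', 'physical_object']
    print(is_kind_of("penguin", "animal"))  # True

Even this toy version hints at the difficulties listed below: nothing in it says whether a penguin can fly, and adding such defaults and exceptions is where the hard problems begin.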
Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem
    Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.
The breadth of commonsense knowledge
    The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering—they must be built, by hand, one complicated concept at a time.[53] A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.[citation needed]

The subsymbolic form of some commonsense knowledge
    Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed"[54] or an art critic can take one look at a statue and instantly realize that it is a fake. These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically.[56] Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.

Integrated Circuit


"Silicon chip" redirects here. For the electronics magazine, see Silicon Chip.
"Microchip" redirects here. For other uses, see Microchip (disambiguation).
Erasable programmable read-only memory integrated circuits. These packages have a transparent window that shows the die inside. The window allows the memory to be erased by exposing the chip to ultraviolet light.
Integrated circuit from an EPROM memory microchip showing the memory blocks, the supporting circuitry and the fine silver wires which connect the integrated circuit die to the legs of the packaging.
Synthetic detail of an integrated circuit through four layers of planarized copper interconnect, down to the polysilicon (pink), wells (greyish), and substrate (green)

An integrated circuit or monolithic integrated circuit (also referred to as an IC, a chip, or a microchip) is a set of electronic circuits on one small plate ("chip") of semiconductor material, normally silicon. This can be made much smaller than a discrete circuit made from independent electronic components. ICs can be made very compact, having up to several billion transistors and other electronic components in an area the size of a fingernail. The width of each conducting line in a circuit can be made smaller and smaller as the technology advances; in 2008 it dropped below 100 nanometers, and has now been reduced to tens of nanometers.
ICs were made possible by experimental discoveries showing that semiconductor devices could perform the functions of vacuum tubes and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability and building-block approach to circuit design ensured the rapid adoption of standardized integrated circuits in place of designs using discrete transistors.

ICs have two main advantages over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume little power (compared to their discrete counterparts) as a result of the small size and close proximity of the components. As of 2012, typical chip areas range from a few square millimeters to around 450 mm², with up to 9 million transistors per mm².

Integrated circuits are used in virtually all electronic equipment today and have revolutionized the world of electronics. Computers, mobile phones, and other digital home appliances are now inextricable parts of the structure of modern societies, made possible by the low cost of integrated circuits.


Terminology

An integrated circuit is defined as

    A circuit in which all or some of the circuit elements are inseparably associated and electrically interconnected so that it is considered to be indivisible for the purposes of construction and commerce.

Circuits meeting this definition can be constructed using many different technologies, including thin-film transistors, thick-film technology, or hybrid integrated circuits. However, in general usage, integrated circuit has come to refer to the single-piece circuit construction originally known as a monolithic integrated circuit.

Invention
Main article: Invention of the integrated circuit

Early developments of the integrated circuit go back to 1949, when German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate in a 3-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. An immediate commercial use of his patent has not been reported.

The idea of the integrated circuit was conceived by Geoffrey W.A. Dummer (1909–2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[8] He gave many symposia publicly to propagate his ideas, and unsuccessfully attempted to build such a circuit in 1956.

A precursor idea to the IC was to create small ceramic squares (wafers), each containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy).[9] However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.
Jack Kilby's original integrated circuit

Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[10] In his patent application of 6 February 1959,[11] Kilby described his new device as “a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated.” The first customer for the new invention was the US Air Force.
Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.[14] His work was named an IEEE Milestone in 2009.
Half a year after Kilby, Robert Noyce at Fairchild Semiconductor developed his own idea of an integrated circuit that solved many practical problems Kilby's had not. Noyce's design was made of silicon, whereas Kilby's chip was made of germanium. Noyce credited Kurt Lehovec of Sprague Electric for the principle of p–n junction isolation caused by the action of a biased p–n junction (the diode) as a key concept behind the IC.
Fairchild Semiconductor was also home of the first silicon-gate IC technology with self-aligned gates, the basis of all modern CMOS computer chips. The technology was developed by Italian physicist Federico Faggin in 1968, who later joined Intel in order to develop the very first single-chip Central Processing Unit (CPU) (Intel 4004), for which he received the National Medal of Technology and Innovation in 2010.
Generations

In the early days of simple integrated circuits, the technology's large scale limited each chip to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. As the technology progressed, millions, then billions of transistors could be placed on one chip, and good designs required thorough planning, giving rise to new design methods.
Name     Meaning                          Year     Number of transistors[18]     Number of logic gates[19]
SSI      small-scale integration          1964     1 to 10                       1 to 12
MSI      medium-scale integration         1968     10 to 500                     13 to 99
LSI      large-scale integration          1971     500 to 20,000                 100 to 9,999
VLSI     very large-scale integration     1980     20,000 to 1,000,000           10,000 to 99,999
ULSI     ultra-large-scale integration    1984     1,000,000 and more            100,000 and more
SSI, MSI and LSI

The first integrated circuits contained only a few transistors. Early digital circuits containing tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit has increased dramatically since then. The term "large scale integration" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept;[citation needed] that term gave rise to the terms "small-scale integration" (SSI), "medium-scale integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration" (ULSI). The early integrated circuits were SSI.

SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated the integrated-circuit technology,[20] while the Minuteman missile forced it into mass-production. The Minuteman missile program and various other Navy programs accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government space and defense spending still accounted for 37% of the $312 million total production. The demand by the U.S. Government supported the nascent integrated circuit market until costs fell enough to allow firms to penetrate the industrial and eventually the consumer markets. The average price per integrated circuit dropped from $50.00 in 1962 to $2.33 in 1968.[21] Integrated circuits began to appear in consumer products by the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.

The first MOS chips were small-scale integrated chips for NASA satellites.

The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI).

In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with an incredible (for the time) 120 transistors on a single chip.
MSI devices were attractive economically because while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.

Further development, driven by the same economic factors, led to "large-scale integration" (LSI) in the mid-1970s, with tens of thousands of transistors per chip.

SSI and MSI devices often were manufactured by masks created by hand-cutting Rubylith; an engineer would inspect and verify the completeness of each mask. LSI devices contain so many transistors, interconnecting wires, and other features that it is considered impossible for a human to check the masks or even do the original design entirely by hand; the engineer depends on computer programs and other hardware aids to do most of this work.
Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.
VLSI
Main article: Very-large-scale integration
Upper interconnect layers on an Intel 80486DX2 microprocessor die

The final step in the development process, starting in the 1980s and continuing through the present, was "very-large-scale integration" (VLSI). The development started with hundreds of thousands of transistors in the early 1980s, and continues beyond several billion transistors as of 2009.

Multiple developments were required to achieve this increased density. Manufacturers moved to smaller design rules and cleaner fabrication facilities, so that they could make chips with more transistors and maintain adequate yield. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS). Design tools improved enough to make it practical to finish these designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption.

In 1986 the first one-megabit RAM chips were introduced, containing more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989 and the billion-transistor mark in 2005.[25] The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors.[26]
ULSI, WSI, SOC and 3D-IC

To reflect further growth in complexity, the term ULSI, which stands for "ultra-large-scale integration", was proposed for chips of more than 1 million transistors.
Wafer-scale integration (WSI) is a means of building very large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed.[28]

A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and building disparate components on a single piece of silicon may compromise the efficiency of some elements. However, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging).
A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.

Microcontroller

A microcontroller is a small computer on a single integrated circuit that is designed to do one job. It contains an integrated processor, memory (a small amount of RAM, program memory, or both), and programmable input/output peripherals, which are used to interact with things connected to the chip.[1] A microcontroller is different from a microprocessor, which contains only a CPU (the kind used in a PC).[2]

First released in 1971 by Intel, microcontrollers began to become popular within their first few years. The extremely useful Intel 8008 microprocessor was then released, but it was still impractical because of the high cost of each chip. These first microcontrollers combined different types of computer memory on one unit. After people began to see how useful they were, microcontrollers were constantly upgraded, with designers finding new ways to improve them. Cost fell over time, and by the early 2000s microcontrollers were widely used across the world.

Other terms for a microcontroller are embedded system and embedded controller, because the microcontroller and its support circuits are often built into, or embedded in, a single chip.
In addition to the usual arithmetic and logic elements of a general microprocessor, the microcontroller also has additional elements such as RAM for data storage, read-only memory for program storage, flash memory for permanent data storage, and other devices (peripherals).
Microcontrollers often operate at very low speeds compared to microprocessors (at clock speeds as low as 32 kHz), but this is adequate for typical applications. They also consume very little power (milliwatts or even microwatts).
Microcontrollers are used in automatic products and devices, such as car engine systems, remote controls, machines, appliances, power tools, and toys. These are called embedded systems. Microcontrollers can also be found at work in solar power and energy harvesting, anti-lock braking systems in cars, and have many uses in the medical field as well.
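As a flavour of how such a chip drives its peripherals, here is a minimal MicroPython-style blink sketch. The board, the choice of GPIO pin 2, and the LED wiring are assumptions made for illustration, and the code runs on a microcontroller port of Python rather than on a PC:

    # Minimal MicroPython-style sketch: toggle an LED wired to GPIO pin 2.
    # Pin number and wiring are assumptions for a hypothetical board.
    from machine import Pin   # MicroPython's hardware-access module
    import time

    led = Pin(2, Pin.OUT)      # configure pin 2 as a digital output

    while True:
        led.value(1)           # drive the pin high (LED on)
        time.sleep(0.5)
        led.value(0)           # drive the pin low (LED off)
        time.sleep(0.5)

The endless loop is typical of embedded firmware: the microcontroller has exactly one job, so it simply repeats it for as long as power is applied.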