# Crash Course Computer Science Fundamentals

Science is typically cumulative: ever more powerful tools are built to make more accurate measurements, better and broader theoretical concepts are needed, and so on. Although paradigms can change, research usually evolves from past results, which constitute the foundation for further development. Scientists are more secure in their research and better prepared for new challenges when they know how their particular problem has evolved historically: what the greatest difficulties were, which solutions were found, and which problems remain open. In the more traditional sciences (Philosophy, Mathematics, Physics, Biology, etc.) the history of the field is always studied, alongside many works dedicated to thinkers and inventors of the first, second, or third magnitude, as well as numerous monographs. In the case of computing, such works are still needed as a basis and reference for students, new researchers, and anyone interested in the theoretical ideas behind the technology that dominates daily life at this turn of the century and millennium.

The history of computing is marked by sudden interruptions and by unexpected, unforeseen changes, which makes it difficult to present the evolution of computers as a simple linear list of inventions, names, and dates. The desire to know how the work of individuals is linked across time is accompanied by the urge to understand the weight of those contributions in the history of computing. Seeking to understand facts through the events that preceded them is one of the primary objectives of this study of the history of computing.

Computer science is a body of knowledge consisting of a conceptual infrastructure and a technological edifice in which hardware and software take material form. The former grounded the latter and preceded it. The theory of computation developed largely independently of technology. It is based on the definition and construction of abstract machines and on the study of the power of these machines to solve problems. Throughout the ages, people have sought effective methods for solving different types of problems. The constant concern to minimize repetitive and tedious effort led to the development of machines that began to replace humans in certain tasks. Among these machines is the computer, which has rapidly spread to fill the modern spaces in which people circulate.

From the appearance of the notion of natural number, through arithmetical notation and on to the notation tied to algebraic calculation, one can trace how fixed rules emerged that allowed computations to be carried out quickly and accurately, sparing, as Leibniz put it, the spirit and the imagination. “Descartes believed in the systematic use of algebra as a powerful and universal method for solving all problems. This belief joined with others among the first ideas about universal machines capable of solving every problem. It was a powerful conviction in a mind that left respectable works in mathematics and in the sciences in general.”

## Alan Mathison Turing: the cradle of Computing

The computer revolution effectively began in 1935, on a summer afternoon in England, when Alan Mathison Turing (1912-1954), a student at King’s College, Cambridge, first heard of Hilbert’s decision problem during a course taught by the mathematician Max Newman. Meanwhile, as briefly mentioned in the previous item, part of the mathematical community was seeking a new kind of logical calculus that could, among other things, place on a sound mathematical basis the heuristic notion of carrying out a calculation. The outcome of this research was fundamental to the development of mathematics: the question was whether there can be an effective procedure for solving all the problems of a given, well-defined class. All these efforts eventually formed the theoretical foundation of what came to be called “Computer Science.” The results of Gödel and Turing on the decision problem were the first attempts to characterize exactly which functions are capable of being computed.

In 1936 Turing came to be regarded as one of the great mathematicians of his time, when he showed his colleagues that it is possible to perform computations in number theory by means of a machine that embodies the rules of a formal system. Turing defined a theoretical machine that became a key concept in the theory of computation. He emphasized from the beginning that such mechanisms could actually be built, and his discovery opened a new perspective on the effort to formalize mathematics while leaving a deep mark on the history of computing. Turing’s brilliant insight was to replace the intuitive notion of an effective procedure with a formal, mathematical one. The result was the construction of a mathematical concept of algorithm, a notion he modeled on the steps a human being follows when carrying out a particular calculation or computation. He thereby formalized the concept of algorithm.

## The Turing Machine

One of the first models of an abstract machine was the Turing machine. As Turing himself described it: “Computing is normally done by writing symbols on paper. We may assume that the paper is a grid of squares; its two-dimensional character can be ignored, as it is not essential. Suppose also that the number of symbols is finite. The behavior of the computer is determined by the symbols he observes at a given moment and by his mental state at that moment. Suppose there is a maximum number of symbols or squares that he can observe at any moment; to observe more, successive operations are required. Let us also assume a finite number of mental states. Let us imagine that the actions taken by the computer are divided into elementary operations so simple that they are indivisible. Each action consists of a change in the system made up of the computer and the paper. The state of the system is given by the symbols on the paper, the symbols observed by the computer, and his mental state. In each operation no more than one symbol is changed, and only an observed one. Besides changing symbols, the operations may change the focus of observation, and it is reasonable that this change should be restricted to squares located within a fixed distance of those just observed. Some of these operations involve a change in the computer’s mental state, which then determines what the next action will be.”
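To make the model concrete, here is a minimal sketch of a Turing machine simulator in Python. It is only an illustration, not Turing’s own formulation: the state names, the blank symbol, and the example transition table are hypothetical choices made for this sketch.

```python
# A minimal Turing machine simulator (illustrative sketch).
# The transition table maps (state, symbol) -> (new_symbol, move, new_state).

def run_turing_machine(transitions, tape, state="start", accept="halt", steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(steps):
        if state == accept:
            break
        symbol = tape.get(head, "_")             # "_" marks a blank square
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol                  # at most one symbol changes per step
        head += 1 if move == "R" else -1         # the head moves a fixed distance
    return "".join(tape[i] for i in sorted(tape))

# Example: a machine that flips every bit of its input, then halts on a blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "1011"))   # -> "0100_"
```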

Turing’s work was documented in *On Computable Numbers, with an Application to the Entscheidungsproblem*, published in 1936. In it he described, in mathematically precise terms, how powerful an automatic formal system with very simple rules of operation can be. Turing observed that mental calculation consists of operations that transform numbers through a series of intermediate states, progressing from one to another according to a fixed set of rules until an answer is found. Sometimes paper and pencil are used so as not to lose track of the state of the calculation. The rules of mathematics require stricter definitions than those found in metaphysical discussions of the states of the human mind, and he concentrated on defining these states so clearly and unambiguously that the definitions could be used to command a machine’s operations. Turing began with a precise description of a formal system in the form of an “instruction table” specifying the moves to be made for every possible configuration of states of the system. He then proved that the steps of a formal axiomatic system similar to logic and the machine states that make the “moves” in an automatic formal system are equivalent to each other. These concepts underlie all of today’s digital computer technology, whose construction became possible only a decade after the English mathematician’s publication. An automatic formal system is a physical device that automatically manipulates the symbols of a formal system according to its rules. The theoretical Turing machine serves both as an example of his theory of computation and as evidence that certain kinds of computing machines could actually be built. In fact a Universal Turing Machine can, except for speed, which depends on the hardware, emulate any current computer, from supercomputers to personal computers, with all their complex structures and powerful computing capabilities, provided that the time spent does not matter. Turing proved that for any formal system there exists a Turing machine that can be programmed to emulate it. In other words: for any well-defined computational procedure, a Universal Turing Machine can simulate a mechanical process that carries it out. From a theoretical point of view, the importance of the Turing machine lies in the fact that it is a formal mathematical object. Through it, for the first time, a good definition was given of what it means to compute something. This raises the question of what exactly can be computed with such a mathematical device, a subject that lies outside the scope of this work and belongs to the field of computational complexity.

## The ACE computer and artificial intelligence

Turing was sent to America to exchange information with US intelligence and to learn about projects related to computers. There he took note of the emerging electronic technologies, took part in another secret project, Delilah, a vocoder (the “scrambler” familiar from spy films), and came into contact with von Neumann (who wanted to bring him into his projects) and with the engineers at Bell (including Claude Shannon). Back in England, he joined the National Physical Laboratory, where he worked on the development of the Automatic Computing Engine (ACE), one of the first attempts to build a digital computer. At the end of the war he already had the knowledge of the new electronic technologies that could be used to increase the speed of the logic circuits then in existence. The real possibility of building models of universal Turing machines led the British government to invest in the construction of this new device, but the Americans were more aggressive in their investments and ended up winning the race to build computers. Turing, a victim of political intrigue, was left out of the center of decision and control of the new work. His technical reports on the hardware and software design of the ACE were ambitious, and had the machine he originally envisioned been built immediately, the British would not have had to lament their delay relative to their colleagues on the other side of the Atlantic. It was also during the ACE period that Turing began to explore the relationship between computers and mental processes, publishing the article Computing Machinery and Intelligence (1950) on the possibility of building machines that imitate the workings of the human brain. “Can a machine think?”, he asked in the article, and besides addressing the subject of machine intelligence, Turing gained particular notoriety by trying to introduce, through this article, a test for deciding whether or not a machine can really think by imitating a human. In November 1991, the Boston Computer Museum held a competition among eight programs submitted to the Turing Test, won by a program called PC Therapist III. The problem with the Turing test is its behaviorist nature: it only observes outward behavior, which gives it a somewhat reductionist character. Serious disputes have occurred, and still occur, on this subject, which is beyond the scope of this book.

## Computer programming

Turing was also interested in planning the operations of a computer, which at the time began to be called coding, with respect to the mathematical operations involved, and he began to write programming languages that were ahead of the hardware of his day. Turing was convinced that arithmetic operations were only one kind of formal system that computers could imitate. In particular, he noticed how the tables of his theoretical machine could become the elements of a grammar that machines would use to modify their own operations. Turing innovated by beginning to draft instruction tables that would automatically convert the decimal notation we are used to into binary digits, which could be read by the machines then starting to be built on the basis of Boolean algebra.
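As a simple illustration of the kind of routine such an instruction table mechanizes, here is a short sketch of decimal-to-binary conversion by repeated division by two. The function name and the worked example are, of course, modern and hypothetical, not Turing’s notation.

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative decimal integer to binary digits
    by repeated division by two, collecting the remainders."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # the remainder is the next binary digit
        n //= 2
    return "".join(reversed(bits))

print(decimal_to_binary(19))   # -> "10011", since 19 = 16 + 2 + 1
```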

Turing thus foresaw that in the future programmers would be able to work with what are now known as high-level languages. He said: “The instruction tables should be made by mathematicians with computing experience and some ability at solving difficult puzzles. There will be plenty of such work to be done, since every known process will have to be converted into instruction-table form at some point. This task will have to proceed in parallel with the construction of the machine, to avoid delays between its completion and the production of results. Delays may occur because of virtually unknown obstacles, to the point where it will be better to leave the obstacles where they are than to spend time designing something free of problems (how many decades will these things take?). This process of preparing instruction tables will be fascinating.”

Turing’s work in computing and mathematics was tragically cut short by his suicide in June 1954, at the age of 42. Turing was homosexual, and after two British spies of the same inclination fled to the Soviet Union in the early 1950s, heavy pressure was put on him to “correct” his condition through the use of hormones. Unable to bear that pressure, Turing took cyanide.

## Technological Prehistory

As has been said, the computer could only be reached through the theoretical discoveries of people who, over the centuries, believed in the possibility of creating tools to enhance human intellectual capacity and devices to take over the more mechanical aspects of human thought. This concern always showed itself in the construction of mechanisms to help both with processes of arithmetical calculation and with repetitive or overly simple tasks that could be handed over to animals or machines. The physical devices that preceded the computer were primarily analog machines, and they fueled the final race up to the appearance of digital computers.

## Older devices

The first devices that came to help humans calculate have their origins lost in time. That is the case, for example, of the abacus and the quadrant. The first, able to solve problems of addition, subtraction, multiplication, and division of up to 12 integer digits, probably existed in Babylon around 3000 BC. It was widely used by the Egyptian, Greek, Chinese, and Roman civilizations, and was still in use in Japan at the end of World War II. The quadrant was an instrument for astronomical calculation that existed for hundreds of years before becoming the subject of several improvements. The ancient Babylonians and Greeks, Ptolemy for example, used various types of such devices to measure angles between the stars; they were developed further in Europe from the sixteenth century onward. Another example is the sector rule used for trigonometric calculations, for instance to determine the elevation of a cannon’s muzzle; it was developed from the fifteenth century onward. The ancient Greeks even developed a kind of computer. In 1901, an ancient Greek shipwreck was discovered off the island of Antikythera. Inside it was a device (now called the Antikythera mechanism) made of metal gears and pointers. According to Derek J. de Solla Price, who together with his colleagues reconstructed this machine in 1955, the Antikythera device is “like a great astronomical clock without the part that regulates the movement, which uses mechanical devices to avoid tedious calculations.” The discovery of this device, dating from the first century BC, was a complete surprise, proving that some artisan of the western Mediterranean Greek world was thinking in terms of the mechanization and mathematization of time.

## Logarithms and the first mechanical computing devices

John Napier, Baron of Merchiston, is well known for the discovery of logarithms, but he also spent much of his life inventing instruments to help with arithmetic, especially for the use of his first table of logarithms. From Napier’s logarithms came another great invention, developed by the brilliant mathematician William Oughtred and published in 1630: the slide rule. It acquired its current form around 1650 (a rule that slides between two fixed blocks), was forgotten for two hundred years, and then became, in the twentieth century, the great symbol of technological progress, extremely widespread in use until it was replaced by electronic calculators.

With the development of the first mechanical devices for automatic calculation, the technological line that would lead to the construction of the first computers effectively begins. The way toward full automation of the calculation process was prepared by the efforts of these early pioneers of computing, who saw the possibility of mechanization but did not have the tools and materials suitable for realizing their projects. Among these great names one cannot fail to mention Wilhelm Schickard (1592-1635), Pascal, as mentioned earlier, and Leibniz. There are whole works on these inventions, so only the essential elements of which they were made up will be cited here, since many of these ideas would be present, in some form, in future computers. Almost all of the machines for mechanical calculation built over the three centuries from the sixteenth onward had six basic elements in their configuration:

• A mechanism by which a number is entered into the machine. In the earliest designs this was part of another mechanism, called the selector; it became an independent mechanism in the more advanced machines;
• A mechanism that selects and provides the movement needed to add or subtract the appropriate amounts in the registering mechanisms;
• A mechanism (typically a set of disks) that can be positioned to indicate the value of a number stored in the machine (also called the register);
• A mechanism to propagate the carry across all the digits of the register, if necessary, when one of the digits of a result register advances from 9 to 0 (a simulation of this is sketched after the list);
• A control mechanism to check the positioning of all the wheels after each addition cycle;
• A “clearing” mechanism to prepare the register to store the value zero.
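To illustrate the fourth element, the carry mechanism, here is a small sketch that simulates digit wheels with carry propagation. The representation (a list of decimal digits, least significant first) and the function name are hypothetical; the real machines did this with gears and ratchets, not code.

```python
def add_to_register(register, addend):
    """Simulate adding a number to a register of decimal digit wheels.

    `register` is a list of digits, least significant first, e.g. [9, 9, 0] == 99.
    A wheel that advances past 9 wraps to 0 and sends a carry to the next wheel.
    """
    carry = addend
    for i in range(len(register)):
        total = register[i] + carry
        register[i] = total % 10      # the wheel shows only its own digit
        carry = total // 10           # the "carry one" passed to the next wheel
    return register                   # a carry out of the last wheel is lost (overflow)

print(add_to_register([9, 9, 0], 1))   # -> [0, 0, 1], i.e. 99 + 1 = 100
```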

## Architectures of computers and operating systems

The term computer architecture refers to the ability to view a machine as a hierarchy of levels, which allows us to understand how computers are organized. The first digital computers, for instance, had only two levels. The first is called the digital logic level, built at first from valves and later from transistors, integrated circuits, and so on. The second, level 1, also called the microprogram level, is the level of machine language, where all programming was done in zeros and ones; it would later be responsible for interpreting the level 2 instructions. With Maurice Wilkes, in 1951, another level appeared, in which instructions were written in a form more convenient for human understanding: the technique consisted of replacing each instruction of this new level with a set of instructions of the previous level (the machine level), or of examining one instruction at a time and executing the equivalent sequence of machine-level instructions. This simplified the hardware, which now needed only a minimal set of instructions, and thus fewer circuits were needed.
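A rough sketch of the idea: each higher-level instruction is interpreted by executing a short sequence of simpler, lower-level operations. The register names, instruction names, and tiny “micro-routines” below are invented for illustration; they stand in for the interpretation technique rather than reproduce Wilkes’s actual microprogramming.

```python
# Hypothetical register set and primitive micro-operations.
regs = {"ACC": 0, "TMP": 0}

def fetch(arg):     regs["TMP"] = arg                       # bring operand into TMP
def add_tmp(_):     regs["ACC"] = regs["ACC"] + regs["TMP"] # ACC <- ACC + TMP
def clear_acc(_):   regs["ACC"] = 0                         # ACC <- 0

# Each higher-level instruction is defined as a sequence of micro-operations.
microcode = {
    "CLEAR": [clear_acc],
    "ADD":   [fetch, add_tmp],    # ADD n  ==  TMP <- n ; ACC <- ACC + TMP
}

def run(program):
    """Interpret (instruction, operand) pairs by executing their micro-sequences."""
    for opcode, operand in program:
        for micro_op in microcode[opcode]:
            micro_op(operand)
    return regs["ACC"]

print(run([("CLEAR", None), ("ADD", 2), ("ADD", 3)]))   # -> 5
```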

After that, the evolution of hardware advanced with new scientific discoveries: at roughly the same time as the appearance of the transistor, for example, the concept of the data bus arose, which increased the speed of computers. At the same time the first major operating systems appeared (put simply, an operating system is a set of programs kept on the computer at all times, freeing the programmer from tasks directly related to the operation of the machine), such as IBM’s DOS and OS. These evolved and allowed new concepts that improved machine performance, such as multiprogramming, i.e., the ability to run several programs in parallel on the same machine. If one of these programs originates from a remote terminal, such a system is called timesharing. A significant milestone that allowed these advances was the introduction of input/output processors, also called channels. This motivated the appearance of the concepts of concurrency, communication, and synchronization: since two processors operate simultaneously, mechanisms are needed to synchronize them and to establish a communication channel between them.

This was the era of mainframe architectures: support for computational tasks and the development of applications were done in a central facility, the computer center. Terminals connected directly to the machine were used only by people working on the available applications. In the 1970s came the supercomputers, machines that innovated in architecture. Until then, the growth in computer efficiency had been limited by technology, more specifically by scalar processing, which requires the CPU of a computer to finish one task before starting another, producing the von Neumann bottleneck.

A breakthrough came with the Cray-1 supercomputer from Cray Research in the 1970s. It was a pipelined machine: the processor executes an instruction by dividing it into parts, as on a car assembly line, so that while the second part of one instruction is being processed, the first part of the next instruction is already being worked on. The subsequent evolution was the vector machine, or SIMD (Single Instruction, Multiple Data) machine, whose processors operate on more than one data set at the same time. A little later came the MIMD (Multiple Instruction, Multiple Data) architecture, and multiprocessor machines appeared, such as the Connection Machine, with 65,536 processors.
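The SIMD idea can be illustrated in miniature with NumPy, where one operation is applied to a whole vector of data at once instead of element by element in a scalar loop. This is only a software-level analogy for the hardware technique, and the array contents are arbitrary example values.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])

# Scalar style: one operation per data element, one element at a time.
scalar_result = [a[i] + b[i] for i in range(len(a))]

# SIMD / vector style: a single addition applied to all elements at once.
vector_result = a + b

print(scalar_result)   # [11.0, 22.0, 33.0, 44.0]
print(vector_result)   # [11. 22. 33. 44.]
```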

This technology also had the advantage of working with the eight-bit expansion cards already on the market and with relatively inexpensive eight-bit devices, such as controller chips. In the search for software, IBM knocked on Digital Research’s door to see whether its highly successful operating system, CP/M, could be ported to the 8086 architecture, but Digital Research rejected the exclusive contract IBM presented. So the IBM team headed for Microsoft, from which it hoped to obtain a version of BASIC, and ended up signing a contract not only for that software but also for the operating system. Microsoft acquired and extended an 8086 operating system from Seattle Computer Products, QDOS, licensing it to IBM, which began to market it under the name PC-DOS.

The 1980s can be characterized by the improvement of software, both operating systems and utilities (spreadsheets, text editors, and others) built to the DOS standard, and by the development of a market of clones of different types of machines able to run programs written for that standard. Apple remained content with its Apple II family, despite failing with the introduction of the Apple III and the formidable LISA, the first attempt to popularize the combination of mouse, windows, icons, and graphical user interface. But its price of US$ 10,000 startled the market. The next step to be taken (not to mention the evolution and improvement of hardware, without which none of this would have been possible) would be the gradual transition of applications from the DOS environment, a veritable sea of products, to a new standard environment that was beginning to take definitive shape and that marked the start of a new age in the history of microcomputers: the Windows operating system, which became the dominant standard for PC applications and made Microsoft a leader in defining multimedia specifications. It is only fair to note, however, that Windows was inspired by the Macintosh standard, released by Apple in 1984: a computer that was able to offer more than a DOS prompt and a character-based interface; it had multiple windows, drop-down menus, and a mouse. Unfortunately, the Macintosh was not compatible with existing programs and applications and was not expandable.

## Computing as a Science

Alongside this evolution of hardware and software, computing broadened in scope and new areas emerged within it, encompassing both. Artificial Intelligence, the Theory of Computational Complexity, and Database Theory opened new fields of study. In the 1960s computer science became a true discipline: the first person to receive a Ph.D. from a computer science department was Richard Wexelblat, at the University of Pennsylvania, in 1965. The studies of Automata Theory and Formal Language Theory were consolidated, mainly with Noam Chomsky and Michael Rabin. The branch of formal specifications, which introduced a new paradigm into the development of computer systems, was also born in this decade, with the beginning of the search for program correctness through the use of formal methods.

R. W. Floyd proposed in 1967 that the semantics of programming languages be defined independently of the specific processors intended to run those languages. The definition could be given, according to Floyd, in terms of a method for proving programs expressed in the language. His work introduced what became known as the method of inductive assertions for program verification, together with a technique using well-founded ordered sets to prove that a program terminates. An extension of Floyd’s ideas was proposed by C. A. R. Hoare in 1969. Hoare built an axiomatic theory of programs that allows Floyd’s method of invariants to be applied to program texts expressed in programming languages whose semantics is precisely formulated. This work later became one of the foundations of what came to be called “structured programming”. Dijkstra developed the idea that such a definition (in the style proposed by Hoare) can be used for the derivation (synthesis) of a program, and not only for its verification.
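As a small, hedged illustration of the inductive-assertion idea (not Floyd’s or Hoare’s own notation), consider a summation loop annotated with an invariant that holds before and after every iteration, plus a decreasing quantity that guarantees termination.

```python
def sum_up_to(n: int) -> int:
    """Return 0 + 1 + ... + n, annotated in the style of inductive assertions."""
    assert n >= 0                      # precondition
    total, i = 0, 0
    # Invariant: total == 0 + 1 + ... + i  and  0 <= i <= n
    while i < n:
        assert total == i * (i + 1) // 2 and 0 <= i <= n   # invariant holds on entry
        i += 1
        total += i
        # The quantity (n - i) decreases on every iteration and is bounded below
        # by 0 (a well-founded ordering), so the loop terminates.
    assert total == n * (n + 1) // 2   # postcondition follows from invariant and i == n
    return total

print(sum_up_to(5))   # -> 15
```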

From these studies emerged Software Engineering, which aims to ensure correctness in the construction of systems. Until then, the development of computer systems had been carried out in an almost handcrafted way, with few criteria to guide the process. This turned out to be fatal, as studies carried out in the 1970s on systems development revealed: lack of correctness and consistency, low quality, extremely costly maintenance caused by problems that went undetected for want of stricter validation of requirements, no reuse of code, missed implementation deadlines as a result of errors detected during that same implementation phase, and so on. As a first reaction to this informal approach, and with a greater degree of formalization, there appeared the system development models and methods called structured, which are in fact sets of rules and guidelines that steer the various stages of system development and the transitions between them. This is the systematic approach; it is still not a formalism defined with precise rules. Prototyping and object orientation are approaches that can be considered systematic. The rigorous approach, by contrast, already has a formal linguistic system for documenting the development stages and strict rules for the transition from one stage to the next, although there is no requirement that the correctness of the transformations be demonstrated.

## The spread of computer literacy

When historians look back and study the twentieth century, they will realize, among other things, that from a scientific point of view it was characterized as a time of technological acceleration and of an unprecedented breakthrough in communications. It is not easy to find historical situations comparable to the technological expansion witnessed in the last fifty years of that century. After the revolutions of iron, electricity, petroleum, and chemistry came the revolution based on electronics and the development of computers. From the seventies onward began the large-scale integration of television, telecommunications, and information technology, in a process that tends to build integrated information networks, with a communication matrix based on digital information and a great capacity to carry data, photographs, graphics, words, sounds, and images, broadcast through various printed and audiovisual media. One can even say that, in a sense, the individual media are being dissolved, because everything is becoming electronic. The integration of the media also produces a progressive fusion of intellectual and industrial activities in the field of information. Journalists in the newsrooms of major newspapers and news agencies, artists, the student community, and researchers all work at a computer screen. In some societies, such as the North American one, almost 50% (1955 data) of the economically active population is dedicated to industrial, commercial, cultural, and social activities related to the collection, processing, and dissemination of information. Informational efficiency increases every day, and technological costs become ever cheaper. Nor should we forget that the computer, unlike other machines (which handle, process, or transport matter and energy), manipulates, transforms, and carries a much cleaner element, one that consumes far less energy and raw material. This opens the door to an almost unlimited growth of information. Since this text deals mainly with the evolution of the ideas and concepts that led to the emergence and development of computer science, we can now speak of a larger, overarching concept whose emergence the computer helped to catalyze: the Information Society. Without wishing to enter into this theme, which deserves a work of its own and whose historical, anthropological, sociological, and even psychological implications are beyond the present scope, two considerations will be made: the problem of information overload and the danger of impoverishment that can be caused by improper use of the computer.