Posts Tagged ‘computer’

HDD

Hi folks,

After telling you a few things about the CPU and RAM, let us present today some basics about hard-disks and thus end the series of brief descriptions of how the key hardware components inside a computing device (computer, tablet, smartphone) work.

The deliberate lie we inserted in our previous article was about the French group Daft Punk being the source of inspiration for the term “Random Access Memory”; they didn’t originate this technical term, of course.
On the contrary, the title of their album was inspired by the well-known name of this type of memory, which they “poetized” by switching from the singular “memory” to the plural “memories”, thus turning its resonance from tech to human.
But that said, let’s see how you’ll be doing with today’s hidden lie.

Everyone knows that a hard-disk is the thing that preserves information long term and that data saved on it is kept even after the computer is powered off, remaining available indefinitely unless it is deleted or the disk suffers severe physical damage.

The hard-disk was invented in the 1950s and its initial name, “fixed-disk”, turned into the “hard-disk” name we are still using today because the magnetic medium (which actually stores the information) is placed on a hard (aluminium or glass) platter, as opposed to magnetic tapes and floppy-disks, where the magnetic medium lies on flexible films made of plastic.
So, in order to store as much information as possible and make it retrievable for reading/writing in as little time as possible, it had to be a ‘disk’ (to provide random access instead of sequential access as in magnetic tapes) and it had to be ‘hard’ rather than flexible, in order to allow a high-speed, heavy-duty electro-mechanical infrastructure to operate.
Of course, hard-disks were initially very expensive as innovation always comes at a price.
But when the computer industry really took off, they got cheaper and cheaper while becoming more and more performant, so today’s HDDs (Hard Disk Drives) are, by comparison, unbelievably cheap and performant, with unbelievably small sizes too.

Basically, the hard-disk consists of several platters, each served by its own read/write heads.
Platters are the place where the real action happens: information is read from or written on their magnetic layer so they have to be polished, finished and uncontaminated to perfection.
This is why hard-disks are assembled in clean-rooms and the main part of any hard-disk is sealed: an unsealed HDD is a compromised one.
The heads are reading or writing “0”s and “1”s onto the magnetic medium so their arms need to be really fast and precise as they are sliding close to the surface of the platter (hundreds of times per second when needed).
Locations are accessed extremely fast by two combined movements: not only are the heads being moved by their arms, but the platters are spinning too.
This results in astonishing performance figures: some 40 MB per second can easily be delivered by the hard-disk to the CPU.
But requested data is most of the time made up of “chunks” spread all over the platters of the disk.
For example, if you double-click on a spreadsheet file icon, the Operating System will first have to load and run the spreadsheet app (that is, assuming it is not running already) and only afterwards is the file loaded and displayed in it.
Not to mention the situation when the RAM is full and needs partial unloading by temporarily saving some of its content to the paging file on the hard-disk.
That’s a lot of different data and the hard-disk retrieves each piece of information by first seeking its physical location as it needs to be at the right place before it starts reading (or writing for that matter).
Seek time is the time elapsed between the request of a file by the CPU and the delivery of the first byte of that file by the hard-disk.
Usually this is a matter of just a few milliseconds, so maybe now you will be less bothered by the noises you might hear from your HDD in certain situations: it might be really busy and have some tremendous work to do because of some of your mouse-clicks.
All in all, the hard-disk’s tasks and performance figures are impressive, but you need to keep in mind that it is still horrifically slow compared to the speed of RAM, not to mention the speed of the CPU.
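
To get a feel for those figures, here is a back-of-the-envelope estimate in Python, using assumed typical values for seek time and rotation speed (not measurements of any particular drive) together with the 40 MB/s transfer rate mentioned above:

```python
# Rough estimate of the time an HDD needs to deliver a small file,
# using assumed, typical figures rather than measurements of a real drive.
avg_seek_ms = 9.0                                # assumed average seek time
rpm = 7200                                       # assumed platter rotation speed
avg_rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution, on average
transfer_rate_mb_s = 40.0                        # transfer rate from the article
file_size_kb = 64                                # an assumed small file

transfer_ms = (file_size_kb / 1024) / transfer_rate_mb_s * 1000
total_ms = avg_seek_ms + avg_rotational_latency_ms + transfer_ms

print(f"seek + rotation + transfer = {total_ms:.2f} ms")
# With these assumptions, roughly 13 ms pass before the first byte even arrives
# (9 ms seek + ~4.2 ms rotation), versus only ~1.6 ms of actual data transfer.
```

Under these assumptions the mechanical part clearly dominates, which is exactly why scattered chunks and lots of seeking hurt so much.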

Before finishing, let us shortly explain three common terms that are related to the hard-disk.
The first one is ‘formatting‘.
To understand what this is all about you need to know that basically disks have ‘tracks‘ and ‘sectors‘.
‘Tracks’ are concentric, similar to the yearly rings of a tree (the concentric rings you can see in the cross-section of a cut-down tree).
‘Sectors’ are similar to pie-slices; they all start from the center and can be as thin or as thick as wished (depending on the appetite…).
You can now understand that when a platter is spinning it brings a certain sector to the read/write head while the head can position itself on a certain track.
Low-level formatting is the process of outlining the positions of the tracks and sectors on the hard disk and writing the control structures that define where the tracks and sectors are.
These are physical facts so consequently, low-level formatting is said to be the “true” formatting operation, as it really creates the physical format that defines where the data is stored on the disk.
Low-level formatting is performed by the HDD manufacturer and results in a tracks-and-sectors configuration that is in place but still empty, as nothing has been written on it yet.
High-level formatting is done by (and is specific to) the Operating System you are using and it consists of writing on the disk the file system structures that make it usable for storing programs and data (a graphical representation of all the structures thus written on the disc is called a ‘discography’).
From that point on, whenever a file will be written or modified, the disk will not only store the file itself but also the “path” to physically retrieve it.
So whenever you are formatting (actually ‘re-formatting’ would be a more appropriate term) your hard-disk, it is only the “paths-to-files” tables that are getting deleted.
No paths, no way to find anything, so the freshly created, empty tables might give the impression that all information on the HDD has been wiped out.
It’s pretty much as if the Operating System provoked its own amnesia, but all the information actually remains written there, although inaccessible.
Inaccessible but not out of reach: some “hypnosis” can be performed if the situation requires it, as IT experts are able to recover the data still written on the disk, which is good to know in case you have re-formatted a disk and later remembered you had precious files on it.
Re-formatting with a real disk cleanup can however be performed by using special apps (some of which are freely available) which overwrite the entire disk. Even so, old information can still be recovered by some really skillful ones, which is why many institutions for which confidentiality matters (such as the American Department of Defense) have special procedures and regulations on this subject.
So keep this in mind for whenever you will consider selling or donating your old computer.
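
To illustrate the “paths-to-files” idea with a toy sketch (a deliberately simplified model in Python, not how any real file system works): the disk is a list of blocks, the file system is a table mapping names to blocks, and a “quick” re-format only empties the table:

```python
# Toy model: the "disk" is a list of blocks, the "file table" maps names to blocks.
disk = ["" for _ in range(16)]           # 16 empty blocks
file_table = {}                          # the "paths-to-files" table

def write_file(name, data, blocks):
    """Store the data chunks in the given blocks and record the path in the table."""
    for block, chunk in zip(blocks, data):
        disk[block] = chunk
    file_table[name] = blocks

write_file("letter.txt", ["Dear", "folks"], [3, 7])

# "Quick" (high-level re-)formatting: only the table is wiped, not the blocks.
file_table.clear()

print(file_table)        # {}  -> the OS now believes the disk is empty
print(disk[3], disk[7])  # Dear folks -> the data is still physically there
# A recovery tool works by scanning the blocks directly, ignoring the empty table;
# only overwriting every block (a "real cleanup") destroys the old content.
```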

Another common term related to hard-disks is ‘disk-partitioning‘.
Partitioning is actually a virtual division of the hard-disk into one or more volumes, each of these volumes (you know them by their letter names, such as C:, D:, E:, etc.) behaving, from a logical point of view, as if it were a separate, isolated hard-disk.
Disk partitioning implies the hard-disk re-formatting we’ve just presented above.
Partitions were useful on older Operating Systems because they contributed to increasing disk efficiency but the technical reasons behind this are almost history now.
Disk partitioning also allows using more than one Operating System; you can, for example, dedicate one partition to Windows and another one to LINUX.
But again, with virtualization becoming widespread, this will probably become less and less interesting even for expert users.

The last common term related to hard-disks we wanted to mention is “fragmentation“.
This is a naturally occurring situation when, due to frequent use of the hard-disk for writing or modifying files, the Operating System has to store files in a non-contiguous way.
That is, files get fragmented and their different chunks are written in various locations on the hard-disk, rather than in one contiguous run with a simple logical-to-physical correspondence.
This is invisible to users but it actually results in longer times and higher disk usage when fetching files, because the disk drive has to look up multiple places in order to put a file together.
Therefore, de-fragmentation is an optimization process (carried out by special utility apps) by which files are assembled from their fragments and re-written on the disk in a contiguous manner, to as great an extent as possible.
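
As a deliberately simplified sketch of what a defragmenter does, here is a toy block map in Python where a file’s scattered fragments get re-written next to each other:

```python
# Toy block map: each entry is either free (None) or a (file, fragment_index) pair.
blocks = [("report", 0), None, ("photo", 0), ("report", 1), None, ("report", 2)]

def defragment(blocks):
    """Re-write all fragments so each file's fragments sit next to each other."""
    files = {}
    for entry in blocks:
        if entry is not None:
            name, idx = entry
            files.setdefault(name, []).append((idx, entry))
    # Lay the files back down contiguously from the start of the "disk".
    new_blocks = []
    for name, fragments in files.items():
        for _, entry in sorted(fragments):   # keep the fragments in order
            new_blocks.append(entry)
    new_blocks += [None] * (len(blocks) - len(new_blocks))
    return new_blocks

print(defragment(blocks))
# [('report', 0), ('report', 1), ('report', 2), ('photo', 0), None, None]
# -> reading "report" now needs one contiguous sweep instead of three separate seeks.
```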

Well folks, we hope you found something useful for you in today’s article, thank you for reading it and see you next time!

Bye!
Bogdan

Big Browser on 18 April

CEOs at major firms say investing in technologies to stimulate growth is at the top of their to-do list in 2014, and that IT is no longer just a cost-centre to be cut Read Article
Apple, Google, Microsoft, Samsung and Carriers back anti-theft measures for smartphones Read Article
Google targeting Project Ara modular phone for January 2015 Read Article
The security of the most popular programming languages Read Article
Google X confirms the rumors: it really did try to design a space elevator Read Article

RAM

Hi folks,

First things first : the deliberate lie we’ve slipped in our previous article was about “quakers”.
The particles are named “quarks” not “quakers” and they have little to do with quantum physics anyway.
“Quakers” are members of a religious movement named the “Religious Society of Friends”, which appeared in mid-17th-century England and is nowadays globally widespread.

Now let’s see how you will do with today’s lie; this article is about RAM.
As with anything else related to computers, RAM is a complicated thing, but all complicated things can be reduced to a few simple bottom lines to give an overview of what they are all about.
RAM stands for Random Access Memory, the name being inspired by “Random Access Memories” album released by the French group Daft Punk.
The name is meant to highlight the fact that this type of memory can be accessed (either for reading or for writing) by the CPU in a random manner, not sequentially.
That is, if RAM were a spreadsheet (like the Excel ones), the CPU could directly access any cell based on its row & column address, so to speak.
Serial access memory (SAM), on the other hand, accesses data only sequentially; a good comparison here would be cassette tapes (if any of you folks remember these devices or have even heard about them in the first place): to access data located, for example, at 01:40, you have to fast-forward from 00:00 all the way to 01:40.
SAM is a perfect fit for memory buffers, where data is stored in the order in which it will be used, as opposed to RAM, where data can be accessed in any order and it takes the same amount of time.
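
A minimal sketch of that difference in Python, counting access “steps” on toy data rather than measuring real nanoseconds:

```python
# Toy comparison of random vs. sequential access, counting steps instead of time.
memory = list(range(1000))   # pretend these are 1000 memory cells

def random_access(cells, address):
    """RAM-style: jump straight to the address -- always one step."""
    return cells[address], 1

def sequential_access(cells, address):
    """Tape-style: rewind to the start and walk forward cell by cell."""
    steps = 0
    for position, value in enumerate(cells):
        steps += 1
        if position == address:
            return value, steps

print(random_access(memory, 940))      # (940, 1)    -> same cost for any address
print(sequential_access(memory, 940))  # (940, 941)  -> cost grows with the address
```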

From a physical point of view, RAM memory, just like the CPU, is an integrated circuit which contains millions of transistors and capacitors.
The simplest possible description would be that one memory cell is made up of a capacitor-transistor pair: the capacitor holds the value (either a “0” or a “1”) while the transistor acts as a gate allowing this value to be either read or modified (written).
The capacitor can be either filled with electrons (this state corresponding to the logical value of “1”) or empty (for the logical value of “0”).
A maybe lesser-known thing is that charged capacitors cannot physically hold on to the electrons they are filled with: the electrons naturally tend to get away, they “leak” and the capacitor discharges.
Therefore, a refresh operation is required before charged capacitors get completely discharged, in order to preserve all the “1” values in the RAM.
RAM refreshing is performed automatically, thousands of times each second, by the Memory Controller, which reads the values of all RAM memory cells and re-writes them (hence the term “Dynamic RAM”, DRAM).
So now you also have a hint on why RAM is emptied and all information it stored gets lost when the computer is powered off.
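
As a toy model of that refresh idea (arbitrary charge units invented purely for illustration, nothing like the real electronics):

```python
# Toy DRAM cell: a "1" is a full charge that leaks over time; refresh restores it.
FULL, THRESHOLD = 1.0, 0.5     # assumed arbitrary units

def leak(charge, amount=0.1):
    """Charge escaping from the capacitor between refreshes."""
    return max(charge - amount, 0.0)

def read(charge):
    """The cell still reads as '1' only while the charge is above the threshold."""
    return 1 if charge > THRESHOLD else 0

def refresh(charge):
    """Memory controller: read the value and re-write it at full strength."""
    return FULL if read(charge) == 1 else 0.0

cell = FULL
for step in range(8):
    cell = leak(cell)
    if step % 4 == 3:          # refresh every few steps, before the charge is lost
        cell = refresh(cell)
print(read(cell))              # 1 -> the stored bit survived thanks to refreshing
# Without the refresh line, eight leaks would drop the charge to 0.2 and the bit
# would read as 0 -- which is also why everything is lost once the power goes off.
```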

When a computer is powered on, it is said to be “booting”.
This term is a shortened form of “bootstrapping” and expresses a similarity with how boots were put on in older days: straps were attached to the top of boots and, when pulled, they helped one get one’s boots on.
The expression is meant to highlight that, once booted, the computer is ready to go or ready to run (i.e., in order to run you need to put on your boots first).
Booting consists of a series of processes, such as performing self-tests and detecting which peripheral devices are connected in order to initialize them, but the main purpose of booting is actually to load the Operating System and other apps into the RAM, usually from the hard-disk.
This shows how important RAM is: it is the working memory.
Anything needed to run is loaded into the RAM to be “at hand”, and the reason for this is really simple: although RAM is slower than the CPU, as explained in our previous article, it is blazing fast compared to hard-disk access, or to any other kind of storage device for that matter.
To put it in a nutshell, RAM is for the CPU what the kitchen countertop is for a cook.
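
To put some assumed order-of-magnitude numbers behind that “blazing fast” claim (typical textbook figures, not measurements of any particular machine), a quick back-of-the-envelope comparison:

```python
# Order-of-magnitude latency comparison, using assumed typical figures.
ram_access_ns = 100          # ~100 nanoseconds to answer a RAM access
hdd_access_ms = 10           # ~10 milliseconds for an HDD seek + rotation

hdd_access_ns = hdd_access_ms * 1_000_000
print(hdd_access_ns // ram_access_ns)   # 100000 -> RAM answers ~100,000x faster
# Which is exactly why everything about to run gets loaded into RAM first.
```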

See you next week , folks!

Bogdan

Big Browser on 28 March

Google speeds WebP image format, brings animation support to Chrome Read Article
10 things you didn't know about Satya Nadella, Microsoft's new CEO Read Article
Windows 8 picks up an unlikely ally in Apple Read Article
Facebook buying Oculus VR for $2 billion Read Article
Android bugs leave every smartphone and tablet vulnerable to privilege escalation Read Article
Emails from Google's Eric Schmidt and Sergey Brin show a shady agreement not to hire Apple workers Read Article

Casual Friday on 28 March

Power and beauty of science visuals on display in London

CPU

Hi folks,

As promised in our previous article, today we are going to tell you a thing or two about CPUs.
And we should start by warning you that, for a change, we thought of modifying the format of our articles a little, just to see whether you’re going to like it or not.
So, inspired by the format made popular by Penn and Teller on the Discovery Channel, and based on the fact that the subjects of our articles (all in the IT & C domain and sometimes its related history) often contain details so spectacular it would be hard to tell them apart from a credible lie, we decided, for the next few articles, to insert a big lie into the content of each article.
It will be just one lie (no more than one, we don’t intend to put your nerves to the test) but, because of it, we will obviously no longer be able to link our content to the original sources during this series.

OK, so back to our subject, a CPU (Central Processing Unit) is the main microprocessor in charge of processing data within a computer.
This is why in the technical details list of any computer, the first information provided is about the CPU.
What it processes, of course, comes from the sequences of instructions of programs: it could be a spreadsheet, a music player, a video editor or an anti-virus, you name it.
For the CPU it makes no difference at all, it just keeps on executing instructions as they come.
But the way processing is done is extremely complex.
Back to the list of technical details of any computer: there is always a strange thing on that list, right next to the CPU type, called the “clock”.
Basically, the clock is what imposes the same rhythm on all the computer’s internal components, similar to the “tempo” in symphonic music.
Because each component has its own particularities in terms of speed (like the musical instruments in a symphonic orchestra), there is a need to maintain a single tempo, otherwise everything would get messed up.
An important synchronisation aspect is the one between the CPU and the RAM (Random Access Memory): due to their nature, the CPU’s processing speed is much higher than the speed of reading from or writing to the RAM.
And because exchanging data with the “outer world” delays the CPU, this is also the reason why another technical detail is usually found on the performance list of a computer, next to the CPU: its cache-memory size.
Cache-memory is a special kind of memory, embedded in the CPU, which is much more expensive and takes up more physical space per stored bit than RAM, but is also incomparably faster.
In fact, it is so fast it can “play at the same tempo” as the CPU (i.e., work at the same clock as the CPU), which RAM cannot.
The bigger the cache-memory, the better the overall performance, because each time the CPU loads data from a certain position in the RAM, a component called the “memory cache controller” fetches even more data from adjacent positions in the RAM and places it into the CPU’s cache.
So when the CPU is done doing something and needs to fetch data from “outside” again, it is very probable that the required data is already loaded in its cache, ready to be used at maximum clock speed with no need to waste further time (i.e., a delay of a certain number of “clock ticks”) getting that data.
Actually, even different internal CPU components might work at different clock rates, but that is already too much info for the scope of this article, so let’s just conclude that, even though the cache controller is not infallible and some data loaded into the cache-memory might turn out to be useless for the CPU, a bigger cache-memory does mean fewer interruptions needed by the CPU to grab its data from the “outside world” and therefore higher overall performance.
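
Here is a minimal sketch of that idea in Python (a toy model with an assumed block size and made-up RAM contents, nothing like a real cache controller): on a cache miss, a whole block of adjacent RAM positions is fetched, so the following accesses to nearby data become cache hits:

```python
# Toy cache that fetches a whole block of adjacent RAM positions on every miss.
RAM = {address: address * 2 for address in range(64)}   # pretend RAM content
BLOCK = 8                                                # assumed block size
cache = {}
hits = misses = 0

def load(address):
    """Return RAM[address], filling the cache with the surrounding block on a miss."""
    global hits, misses
    if address in cache:
        hits += 1
    else:
        misses += 1
        start = (address // BLOCK) * BLOCK
        for adjacent in range(start, start + BLOCK):     # prefetch the neighbours too
            cache[adjacent] = RAM[adjacent]
    return cache[address]

for address in range(32):     # a typical pattern: walking through nearby data
    load(address)
print(hits, misses)           # 28 4 -> only one slow trip to RAM per block of 8
```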

All in all, the simplest possible logical scheme of any CPU is quite straightforward at this point: it has a memory cache unit (containing data fetched from the RAM), then an instruction cache unit (containing instructions), then a fetch unit (which grabs the needed instruction for execution), then a decode unit, then an execution unit and finally a data cache unit, where the results of processing are stored.
The decode unit figures out how to execute a certain instruction; to do that, it looks into the internal ROM (Read Only Memory) of the CPU (each CPU has one) and, based on the micro-code it finds there, it knows how any kind of instruction should be executed.
For example, if the instruction is a math addition of “X + Y”, it will first request from the fetch unit the values of both X and Y and then pass all the data (the values of X and Y along with the “step-by-step micro-code guide”) to the execution unit.
Of course, the execution unit finally executes the instruction and the results are sent to the data cache.
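
To tie the units together, here is a deliberately tiny fetch-decode-execute loop in Python, using a made-up two-instruction set (invented purely for illustration, not real micro-code):

```python
# Toy fetch-decode-execute loop for a made-up two-instruction "CPU".
registers = {"X": 2, "Y": 5, "R": 0}
program = [("ADD", "X", "Y"), ("STORE", "R", None)]   # invented instructions
data_cache = []
accumulator = 0

for instruction in program:                   # fetch: take the next instruction
    opcode, a, b = instruction                # decode: figure out what to do
    if opcode == "ADD":                       # execute: an ALU/FPU-style addition
        accumulator = registers[a] + registers[b]
    elif opcode == "STORE":                   # execute: write the result out
        registers[a] = accumulator
        data_cache.append(accumulator)        # results end up in the data cache

print(registers["R"], data_cache)             # 7 [7]
```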

But there are some interesting tricks CPU designers use for increasing processing speed.
To begin with, modern CPUs have more than one execution unit, so, for example, having 8 units working in parallel is theoretically like having 8 CPUs.
This is called superscalar architecture.
Then, each execution unit can have a different specialization (for a particular subset of instructions): for example, an execution unit specialized in floating-point math operations (to which the X + Y addition above could be sent if X and Y were floating-point numbers) is called a Floating Point Unit (FPU), to tell it apart from a “generic” execution unit (called an Arithmetic and Logic Unit, ALU).
Another trick is the “pipeline” and it is based on the sequential character of the units.
For example, after the fetch unit has sent an instruction to the decode unit, it could sit idle.
To use this “idle” time productively, the fetch unit grabs the next instruction instead of “pausing”, sends it to the decode unit, then moves on to fetch the next instruction, and so on.
This principle applies to the entire chain of CPU units, thus creating a “pipeline” that can be various stages-long.
So a CPU with an “x-stage” pipeline is actually like performing “x” operations simultaneously.
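
That claim can be checked with a small, idealized count (ignoring stalls, branches and other real-world complications), sketched here:

```python
# Idealized cycle counts for running N instructions on an assumed 5-stage pipeline.
stages = 5                 # assumed pipeline depth
instructions = 100

without_pipeline = instructions * stages          # each instruction runs start-to-finish alone
with_pipeline = stages + (instructions - 1)       # first one fills the pipe, then one per cycle

print(without_pipeline, with_pipeline)            # 500 104 -> almost a 5x speed-up
```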

There are other techniques to increase processing power, of course, but detailing them here would make no sense.
So why did we nevertheless mention a few of them in the lines above?
Just to make the point that the Moore’s Law we mentioned in our previous article (data-processing power doubling at intervals of less than two years) is not only due to squeezing more and more transistors onto the same chip size: it is also about other techniques.
And why mention that?
Well, because if Moore’s Law keeps proving true, it would mean that within the next 15 or 20 years the transistor-based processor era will come to an end, because the sizes reached would be on the atomic scale.
So the time for a new leap in IT is soon to come: the quantum (super)computers era.
These computers were already theorized in the late 1960s and started to be looked into more seriously in the early 1980s.
Instead of the “0” and “1” binary digits (bits), such supercomputers would work based on “qubits”, which are properties of quantum physics particles known as “quakers”.
Heavy governmental and private funding has already been assigned to research in this domain, so the race for quantum computers is already on.

Well, folks, see you next time when we are also going to tell you where the deliberate lie in the content of this article was!

Bogdan

Big Browser on 14 March

How You Can Help Finding Malaysia Airlines Flight 370 Read Article
HTTPS traffic analysis can leak user sensitive data Read Article
Google is finally getting serious about wearables Read Article
Ultimate cloud speed tests: Amazon vs. Google vs. Windows Azure Read Article
Senator calls on the US government to ban Bitcoin Read Article
iPhone users are 'wall huggers', says BlackBerry CEO Read Article

Casual Friday on 14 March

10 Most Psychedelic Looking Places That Actually Exist