The following blog post is a compilation of excerpts from Open University text modules TM111 and TM129 and serves as a means to help me better understand and remember certain physical and computing principles and ideas; all credit therefore goes to the Open University authors of those modules.
What our ears interpret as sound is a whole range of vibrating air particles travelling through the air in waves. These waves generally consist of small, rapid movements (or fluctuations) of the atmospheric air pressure that surrounds us. As a sound wave moves forward, it makes the air bunch together in some places and spread out in others, creating an alternating pattern of squashed-together areas (compressions) and stretched-out areas (rarefactions). In other words, sound pushes and pulls the air back and forth. Sound waves are compression waves and also longitudinal waves, since the air vibrates along the same direction as the wave travels. The fluctuations in air pressure travel outwards from the source through the surrounding air, becoming gradually weaker and eventually dying away completely. Sound not only travels through air; it can also be transmitted through other media, such as water. In the absence of a medium (i.e., in a vacuum, like space), pressure waves cannot be set up and so sound cannot travel.
A loudspeaker generates sound by moving a paper cone backwards and forwards at the required frequency. This causes a pressure wave to be transmitted through the air. The more rapidly the cone vibrates, the closer are the wavefronts, which we perceive as higher-pitch (higher-frequency) sounds. Larger movements of the cone produce greater changes in pressure which we perceive as louder (greater-amplitude) sounds.
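The relationship above between how rapidly the cone vibrates and how close the wavefronts are can be made concrete: the spacing between successive wavefronts (the wavelength) is the speed of sound divided by the frequency. A minimal sketch in Python, assuming the usual figure of roughly 343 m/s for the speed of sound in air at room temperature:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed value)

def wavelength(frequency_hz):
    """Distance between successive wavefronts, in metres."""
    return SPEED_OF_SOUND / frequency_hz

# Faster cone vibrations (higher frequency) give more closely spaced wavefronts.
for f in (100, 1000, 10000):
    print(f"{f} Hz -> wavelength {wavelength(f):.3f} m")
```

So a 100 Hz tone has wavefronts over three metres apart, while a 10 kHz tone packs them a few centimetres apart, which is exactly why a faster-vibrating cone is heard as a higher pitch.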
For a listener in the vicinity of the sound source, the pressure variations act on the listener’s hearing system, causing the eardrum to move in sympathy with the source of the pressure variations. The movements of the eardrum are detected by the hearing system and are interpreted by the brain as sound.
Thus, any sound or noise we hear is, in its physical form, a pressure wave of vibrating air particles: a travelling wave transmitting energy. A microphone converts this compression wave into an electrical signal, which can be plotted as a graph showing the pressure amplitude of the wave over time. To digitise this analogue representation of sound, we first break the wave up into small units of time, called samples; just as with breaking a picture up into pixels, the smaller the time intervals, the more closely the encoding will represent the original sound. Sampling is the first part of the analogue-to-digital conversion and is followed by the quantisation stage, which maps each sample to one of a number of discrete voltage bands (usually a 16-bit range of numbers). The larger the range of numbers, the better the digital copy of the analogue original, but we can never make a perfect digital representation of an analogue quantity.
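The two stages above, sampling and then quantisation, can be sketched in a few lines of Python. This is an illustration only: the 8,000 samples-per-second rate is an assumed figure chosen for the example, while the 16-bit depth comes from the text. A pure sine tone stands in for the analogue signal.

```python
import math

SAMPLE_RATE = 8000                 # samples per second (assumed for illustration)
BITS = 16                          # quantisation depth, as in the text
MAX_LEVEL = 2 ** (BITS - 1) - 1    # 32767: largest 16-bit signed sample value

def sample_and_quantise(freq_hz, duration_s):
    """Sample a pure sine tone, then map each sample to a 16-bit integer."""
    n_samples = int(SAMPLE_RATE * duration_s)
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                                # sampling: discrete time steps
        amplitude = math.sin(2 * math.pi * freq_hz * t)    # analogue value in [-1, 1]
        samples.append(round(amplitude * MAX_LEVEL))       # quantisation: discrete levels
    return samples

tone = sample_and_quantise(440, 0.01)   # 10 ms of a 440 Hz tone -> 80 samples
```

Rounding each sample to the nearest of the 65,536 levels is exactly where the imperfection mentioned above creeps in: the tiny difference between the true analogue value and the nearest level (quantisation error) can never be fully eliminated, only reduced by using more bits.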
Sound has a spectrum of frequencies. The human ear can detect frequencies as low as 15 vibrations per second, and as high as 20,000 vibrations per second. Dogs can detect frequencies around 50,000 vibrations per second, which is outside the human hearing range. Bats can detect even higher frequencies of about 100,000 vibrations per second. By emitting high-frequency sound at about 50,000 vibrations per second and listening for an echo, bats can locate objects in front of them.
These frequencies, beyond the human hearing range, are called ultrasound. There are ultrasound devices which act as range finders. These devices emit a ping of ultrasound and measure the time taken to receive an echo; by using the velocity of sound, it is possible to calculate the distance to the object. This technology was used in early autofocus cameras, and devices are cheap enough to be used in hobby robots such as the LEGO Mindstorms NXT.
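The range-finding calculation described above is simple enough to sketch directly: the ping travels out to the object and back, so the distance is half the round-trip time multiplied by the speed of sound. A minimal sketch, again assuming roughly 343 m/s for the speed of sound in air:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed value)

def distance_from_echo(echo_time_s):
    """Distance to an object, given the time for a ping's echo to return.

    The ping covers the distance twice (out and back), so halve the trip.
    """
    return SPEED_OF_SOUND * echo_time_s / 2

# An echo arriving after 10 ms puts the object about 1.7 m away.
print(distance_from_echo(0.010))
```

The same halved round-trip arithmetic applies to the laser rangefinders discussed below, just with the speed of light in place of the speed of sound.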
Sound travels through liquids and solids as well as air. Sonar is the sound equivalent of radar and is used for undersea navigation.
Laser rangefinders are also now available; they work on a similar principle to an ultrasound rangefinder, but with a tightly focused laser beam their directionality and accuracy are much higher. When the laser beam is scanned back and forth, it is possible to build up an image; this is called lidar, by analogy with radar and sonar.
For more about lidar and radar, follow this link.
Optushome.com.au. (2020). Chapter 1c: sound waves. [online] Available at: http://members.optushome.com.au/scottsoftc/Chapter01/Chapter1c.htm [Accessed 25 Jan. 2020].
Slideplayer.com. (2020). [online] Available at: https://slideplayer.com/slide/7574417/24/images/4/Compressional+Wave+Transverse+Wave+compression+rarefaction+crest.jpg [Accessed 25 Jan. 2020].
Teeks99. (2011). A flow of audio from sound waves through a microphone to an analogue voltage A-D converter, computer, D-A converter, analogue speaker and finally as sound waves again. [online] Available at: https://commons.wikimedia.org/wiki/File:A-D-A_Flow.svg [Accessed 3 Nov. 2018].