The world of modern technology buzzes with jargon like “artificial intelligence,” “machine learning,” “Thunderbolt” and more. We hear these words all the time, but the average consumer might not understand what they actually mean.
These words often represent the forefront of technology: self-driving cars, the funny robots on YouTube that get kicked around, and other cutting-edge developments. Machine learning entails computers analyzing an example dataset to discover and understand real-world patterns. It is a narrower term than artificial intelligence, which describes the general use of computers to perform intelligent tasks.
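To make that idea concrete, here is a tiny illustrative sketch in Python. The program is never told the rule; it infers one from a handful of made-up example points and then uses it to predict a new value.

```python
# A minimal sketch of the idea behind machine learning: the program is not
# given the rule directly; it infers one from example data. Here we fit a
# straight line (y = a*x + b) to some made-up example points and then use
# the learned rule to make a prediction the examples never contained.

examples = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # (input, observed output) pairs

n = len(examples)
mean_x = sum(x for x, _ in examples) / n
mean_y = sum(y for _, y in examples) / n

# Ordinary least squares: pick the slope and intercept that best fit the examples.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in examples)
         / sum((x - mean_x) ** 2 for x, _ in examples))
intercept = mean_y - slope * mean_x

# The "learned" rule can now be applied to inputs it has never seen.
print(f"prediction for x=5: {slope * 5 + intercept:.2f}")
```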
Machine learning appears everywhere in modern computing. Common applications include the recommendation algorithms that suggest videos on YouTube and shows on Netflix. If you’ve ever found yourself wondering why your YouTube recommendations are so strange, you share something in common with the engineers who designed the algorithm. In 2016, Google published a paper titled “Deep Neural Networks for YouTube Recommendations,” which describes parts of the process as “more art than science.”
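The real system is a deep neural network, as the paper’s title suggests, but the basic intuition behind recommendations can be sketched in a few lines: score each candidate video by how similar it is to what a user has already watched. The genres, numbers, and titles below are invented purely for illustration.

```python
# A toy recommendation sketch: rank candidates by cosine similarity to the
# user's viewing history. This is a deliberately simple stand-in, not
# YouTube's actual system.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical interest scores per genre: [music, gaming, cooking, tech]
user_history = [0.7, 0.1, 0.0, 0.9]
candidates = {
    "synth tutorial":     [0.9, 0.0, 0.0, 0.6],
    "speedrun recap":     [0.1, 0.9, 0.0, 0.2],
    "pasta from scratch": [0.0, 0.0, 1.0, 0.1],
}

best = max(candidates, key=lambda title: cosine(user_history, candidates[title]))
print("recommended:", best)
```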
Algorithms like YouTube’s are difficult to perfect because once the model is trained, engineers have little direct control over its behavior; changing how it works requires retraining it. Selecting an appropriate training dataset also requires great care: hand-picked example data may bear little resemblance to real-world data, but training on organic usage data is difficult because users are unpredictable. And using the same model for an entire user base is inherently imprecise because real people differ so widely.
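The point about retraining can be seen in a toy example: once trained, a model is nothing more than a frozen set of learned numbers, so new behavior only appears after training again on different data. The tiny one-feature classifier and datasets below are made up for illustration.

```python
# Why a trained model can't simply be tweaked in production: after training,
# all that exists is a frozen parameter, so different behavior requires
# training again on different data.

def train_threshold(labeled_examples):
    """Learn a single cutoff that separates label 0 from label 1 as well as possible."""
    best_cut, best_correct = None, -1
    for cut in sorted(v for v, _ in labeled_examples):
        correct = sum((v >= cut) == bool(label) for v, label in labeled_examples)
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut  # the entire trained "model" is just this number

spring_data = [(2, 0), (3, 0), (7, 1), (9, 1)]
autumn_data = [(4, 0), (5, 0), (11, 1), (14, 1)]

model = train_threshold(spring_data)
print("cutoff learned in spring:", model)        # fixed until we retrain
model = train_threshold(autumn_data)
print("cutoff after retraining in autumn:", model)
```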
Apple is at the forefront of bridging the gap between training and usage: its Neural Engine introduces a new way to handle machine learning. In September, the company unveiled the A14 Bionic chip for the iPhone 12 and the new iPad Air, bringing machine learning to consumer electronics like never before. Rather than relying on aggregated data from the cloud, algorithms run entirely on-device, learning from each individual user. For example, iPadOS 14 introduces a feature called Scribble that converts handwriting into typed text and cleans up roughly drawn shapes; Scribble learns from your handwriting alone, becoming more accurate the more you use it. This comes with the added benefit of improved privacy: your data never needs to be sent to the cloud.
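Apple has not published Scribble’s internals, but the general idea of on-device, per-user learning can be sketched conceptually: the model lives on the device, updates itself from that one user’s input, and never uploads anything. The “stroke slant” trait and the numbers below are purely hypothetical.

```python
# A conceptual sketch of on-device, per-user learning. This is only an
# illustration of the idea, not Apple's actual Scribble or Neural Engine
# implementation.

class OnDeviceModel:
    """Keeps a per-user running estimate of one handwriting trait (e.g. stroke slant)."""

    def __init__(self):
        self.samples_seen = 0
        self.average_slant = 0.0   # the personalized parameter, stored locally

    def update(self, observed_slant):
        # Incremental (online) mean update: no raw history is retained,
        # and nothing ever leaves the device.
        self.samples_seen += 1
        self.average_slant += (observed_slant - self.average_slant) / self.samples_seen

    def expected_slant(self):
        return self.average_slant

model = OnDeviceModel()
for stroke_slant in [12.0, 9.5, 11.0, 10.2]:   # made-up measurements from one user
    model.update(stroke_slant)

print(f"personalized estimate: {model.expected_slant():.2f} degrees")
```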
While machine learning is abstract to most users, a more tangible development is the adoption of USB Type-C in consumer electronics. A drastic change like a new connector is bound to be a long process; the specification was released in 2014 and still has a long way to go before it can be considered mainstream. Of course, it is easy to question why we need yet another cable, which is exactly what makes Type-C so different: it is meant to be the connector “to kill all other connectors.” Older USB connectors had only four pins: two power lines and two data lines. That is enough for simple devices like mice, keyboards, and flash drives. Type-C has 24 pins, enabling it to carry audio and video on top of the usual data and power. Combined with Intel’s recently released Thunderbolt 4, a data transfer standard that uses the Type-C connector, one cable can connect displays, high-speed drives, intelligent fast chargers, and even external graphics cards or networking equipment. For now, Thunderbolt devices are significantly more expensive than their plain Type-C counterparts, but prices should fall as both standards are adopted.
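As a rough illustration of why the pin count matters, here is the comparison above expressed as a small data structure; it is a simplification, not a full pinout.

```python
# Simplified comparison of connector capabilities, using only the figures
# mentioned above; not an exhaustive or official pinout.
connectors = {
    "USB Type-A (USB 2.0)": {"pins": 4,  "carries": ["power", "data"]},
    "USB Type-C":           {"pins": 24, "carries": ["power", "data", "audio", "video"]},
}

for name, spec in connectors.items():
    print(f"{name}: {spec['pins']} pins -> {', '.join(spec['carries'])}")
```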
From cables to CPUs to self-driving cars, there is a lot of tech to look forward to in the coming years. We already live in an incredibly advanced world; the possibilities for the future are endless.