Patterns in toast: How algorithms make machine learning work

Published on 05/01/2017 | Written by Donovan Jackson


Ever seen Jesus in a piece of toast? How algorithms and machine learning mimic the mind…

The human mind constantly seeks patterns in the raw data which comes in from the senses. Most of the time, those patterns add up to useful information which guides our behaviour. In other instances, the patterns might be there, but the interpretation of them can lead us astray. When Jesus appears in a piece of toast, for example, a false positive has been produced which relies to an extent on the framework within which the disciple interprets the image. The faithful see Jesus and ascribe meaning to the apparition. The rest see a coincidence and carry on with breakfast.

Called pareidolia, this phenomenon is useful in understanding how algorithms, machine learning (ML) and artificial intelligence work: at the most fundamental level, pointed out Andrew Peterson, data scientist at Soltius, an algorithm is just a repeatable set of sequential rules designed to solve a particular problem – sort of like the frameworks and rules which society imprints on our minds so we can interpret goings on and make decisions on them.

“For example, you could write an algorithm for getting to work in the morning: 1) get out of bed when alarm clock rings; 2) go to bathroom and shower; 3) eat breakfast… At a more sophisticated level, every computer program consists of a few to many algorithms, and some people might even say that a complete program is itself an algorithm,” he said.
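Peterson's morning-routine example can be written out literally as code. This is only an illustrative toy (the function name and steps are invented for this sketch), but it makes his point concrete: an algorithm is a repeatable set of sequential rules that always produces the same result from the same input.

```python
# A toy rendering of Peterson's 'getting to work' algorithm: a fixed,
# repeatable sequence of rules. All names and steps here are invented
# for illustration.
def morning_routine(alarm_ringing):
    steps = []
    if alarm_ringing:
        steps.append("get out of bed")
    steps.append("shower")
    steps.append("eat breakfast")
    steps.append("go to work")
    return steps

print(morning_routine(True))
```

Run twice with the same input, it gives the same steps in the same order, which is exactly what makes it an algorithm rather than a habit.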

It goes from the straightforward to the complex pretty quickly, as Peterson noted that AI and ML, broadly considered to be on the bleeding edge of computing today, are all based on algorithms. “If algorithms did not exist, then neither would these technologies,” he said.

Practical applications of algorithms span almost every aspect of life today, continued Peterson. “That goes from the rules we use to add two numbers together all the way to airline navigation systems and the ‘bits’ that make something like BabyX possible. In many respects, the world that humans have created would be impossible without algorithms. Anything that involves technology of any kind will, at some point, depend on a few to millions of algorithms. Even a light bulb will depend on algorithms for its manufacture, distribution, sale, and the infrastructure required to enable it to produce light in our home.”

Supervised or unsupervised
Chris Auld, director of digital experience at Microsoft NZ, added that in the context of ML, an algorithm is a mechanism that learns from data.

“There are two broad types of ML: supervised and unsupervised. With the former, we present the algorithm with data examples which are labelled. For example, take an algorithm for identifying the position a person would play in a rugby team. We’d provide context by taking photos and inputting the weights and measurements of thousands of rugby players, identifying the positions they play, and presenting that information to the ML algorithm, and it will learn what characteristics make up certain players.”

From the context provided, the machine will make decisions when it is shown pictures of ‘uncategorised’ people and identify which position that individual is likely to be best for. Auld has, in effect, provided an example of an algorithm which can perform the basic task of a school selector setting out to put together a rugby team from novice players.

By contrast, he said, unsupervised ML isn’t given any instructions on how to categorise the raw material fed into it. Instead, the computer itself starts looking for patterns and making ‘decisions’ to categorise data. In the rugby player example, explained Auld, the algorithm would identify short, stout people and cluster those into a group. Lanky people, another group. By contextualising the groups with known characteristics, the machine could eventually start making decisions about what position any group of people would be best suited to.
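The unsupervised version can be sketched just as briefly. Here no labels are supplied at all; a tiny one-dimensional k-means loop (the weights and the two-cluster choice are assumptions for this toy) groups players' weights into the "short, stout" and "lanky" clusters Auld describes, purely from the numbers.

```python
# Unsupervised learning in miniature: no labels, the algorithm groups
# raw weights into k clusters itself (1-D k-means). Data is invented.
def kmeans_1d(values, k=2, iters=10):
    # Seed the two cluster centres at the extremes (fine for this toy).
    centres = [min(values), max(values)]
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centres[i]))
            clusters[nearest].append(v)
        # Move each centre to the mean of its cluster.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

weights = [83, 85, 84, 115, 118, 116]
centres, clusters = kmeans_1d(weights)
print(clusters)
```

The algorithm is never told which group is which; attaching meaning to the clusters ("these are probably forwards") is the contextualising step Auld mentions.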

“An algorithm is therefore a mechanism for a computer to learn from being presented with data,” said Auld.

He said many ML algorithms are some derivative of a decision tree, a concept which he said anyone can understand – and which is rooted in the basic ‘if-then’ logic used to group items; in supervised ML, the logic is effectively input by a person (‘an expert builds the decision tree’). “That’s the mechanism by which we [as humans] can learn patterns in a data set with which we are presented. Seeing Jesus in a piece of toast is well within that ambit; it’s just a case of ‘overfitting’: interpreting the information we are presented with through a context which is probably not valid.”
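The 'expert builds the decision tree' case is the simplest of all to sketch: a few hand-written if-then rules. The thresholds below are invented for illustration, but they are exactly the kind of logic a person encodes directly rather than letting the machine learn it.

```python
# An expert-built decision tree: hand-written if-then rules grouping
# players by position. Thresholds are invented for illustration.
def position_rule(height_cm, weight_kg):
    if weight_kg > 100:
        return "front row"
    if height_cm > 195:
        return "lock"
    return "back"

print(position_rule(185, 115))
```

Learned decision trees do the same branching; the difference is that an algorithm, not an expert, chooses which questions to ask and where to set the thresholds.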

Overfitting, added Auld, is a common issue in ML applications. In statistics, overfitting occurs when a model is fitted so closely to a small sample that it captures noise rather than the underlying pattern, so its conclusions don’t generalise. “There are many techniques we apply to avoid overfitting; one of the best is to have a huge amount of data.”
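Overfitting can be shown in miniature. The 'model' below simply memorises three invented training points (including one noisy label), so it scores perfectly on the data it has seen but faithfully reproduces the noise on anything nearby, which is why the cure Auld names is more data.

```python
# Overfitting in miniature: a model that memorises a tiny, noisy sample
# is perfect on that sample but generalises badly. Data is invented;
# the label at height 186 is deliberately 'noise'.
train = {160: "back", 185: "forward", 186: "back"}

def memorise(height_cm):
    if height_cm in train:          # perfect fit on the training set...
        return train[height_cm]
    # ...but off-sample it just echoes the nearest memorised point,
    # noise included.
    nearest = min(train, key=lambda t: abs(t - height_cm))
    return train[nearest]

print(memorise(187))  # sits next to the noisy point
```

A 187cm player lands next to the mislabelled 186cm example and inherits its bad label; with a huge training set, one noisy point would be swamped by its correctly labelled neighbours.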

A solved problem?
We are and have been surrounded by algorithms for rather a long time – but ML and AI have zoomed into vogue of late, with claims (by Microsoft, for example) that these long-standing, thorny challenges are today ‘solved problems’. “What’s driven the re-emergence of AI and machine learning is the availability of huge amounts of data. It’s not particularly new algorithms; AI is fundamentally predicated on the ability of machines to learn from environments and data and, for that, you need massive amounts of raw material,” said Auld.

That’s what cloud computing has enabled; it has ‘freed’ data to an extent, while the commodity availability of massive computing power and storage capacity makes the ability to simply mess around with ML and AI a reality.

Peterson said there is a degree of truth in the claim that ML and AI are solved problems from a technical perspective. “But only in terms of the problems that humanity has tried to solve with existing technology. There are many problems we face that no one can solve with technology – at least for now. Furthermore, even though Microsoft may reckon ML and AI are ‘solved’, there is still tremendous inertia among sectors of society to adopt these technologies for their own benefit.”

Inertia like ‘this is the way we’ve always done things’, or ‘this is going to destroy jobs’, for example.

Better interfaces (and limitations)
One of the lasting challenges of any sort of technology, from a tractor to an iPad, is the human/machine interface. Progress has certainly been made: two-year-olds would likely find it easier to use an iPad than they would a tractor.

That’s one area where Peterson sees value for ML and AI, but he also noted that ‘uncanny valley’ is definitely still a thing. “Improvements in the human-machine interface will be helped by algorithms in general, but moving in the direction of avatars like BabyX as the principal way of interacting with a machine is, I believe, a long way from public acceptance – at least if people know they are interacting with a machine.”

It’s a perception issue; despite the fact that we are and have been surrounded by algorithms for most of our lives, when machines make overt decisions it can be unnerving – something content strategist Max Johns discovered when producing a chatbot for Australia’s NAB Bank. “If they don’t know it’s a machine then I’m sure no one will have a problem with it. But when I watch the BabyX video, I have to admit that it makes me feel somewhat unsettled. People are driven much more by what they feel than what they think, and if their interaction with a machine makes them uncomfortable then they will be resistant to it.”

To an uncertain future
Microsoft’s Auld said the most sophisticated ML will be either entirely or largely built on unsupervised learning. The major reason? “We do not have the manpower to label and present the enormous amount of available data to ML algorithms. And we’re seeing that already; for example Cortana has an API that you can call in the cloud, present it with an image, and it will tell you what is in that image. It has learned that largely through an unsupervised process.”

For propellerheads like Auld, this seemingly trivial capability is nothing short of breathtaking. Think of it in the context of a newborn baby: the child cannot identify anything at all, much less act on it, and indeed, to merely function in society, a massive amount of learning has to take place. To identify, for example, a man in a yellow kayak going over a waterfall, requires knowledge of a vast number of constructs.

It is, therefore, anything but trivial. “Only a handful of organisations can provide a service like that. To do it, you need hyperscale cloud to train the algorithm; a data set of Facebook scale. In fact, it isn’t something an organisation can do on its own – you have to leverage collective data sets.”

There could be something sinister at play, as no less than Stephen Hawking has fretted that ML and AI could get so good it could herald the end of humanity (others believe climate change is the likely mechanism).

Be that as it may, Auld said what excites him most about algorithms and ML is the increased digitisation and collectivisation of data and the incentives to use it. “Enormous benefits will flow from the growing numbers of contributors, human and machine, to these data sets.”

And for Peterson, algorithms and ML provide a means of understanding the complexity of the world in a way that is not possible without them. “Most people focus on headline grabbing applications like BabyX and robotics, but there are so many other valuable ways these tools and technologies can be used to improve understanding and lives.”

Peterson said that ultimately, Stephen Hawking is probably correct. “There is a possibility for some type of new ‘industro-technology’ where machines can easily outperform humans. It’s already happening now – almost every day I’m developing applications based on advanced statistics and ML that easily outperform people by orders of magnitude for even simple problems, but again, adoption is slower than some people believe. Nevertheless, I don’t think we will see any major revolutions soon, but that could be quite different in 20 years from now.”

With any luck at all, interesting images in toast, too, will still be a part of that uncertain future.
