Artificial Intelligence – Is It Really As Cool As It Sounds?

Lately, artificial intelligence has very much been the hot topic in Silicon Valley and the broader tech scene. To those of us involved in that scene it feels like extraordinary momentum is building around the subject, with all kinds of companies building A.I. into the core of their business. There has also been a rise in A.I.-related university courses, which is sending a wave of extremely bright new talent into the job market. But this is not simply a case of confirmation bias: interest in the topic has been on the rise since mid-2014.

The noise around the subject is only going to increase, and for the layman it is all very confusing. Depending on what you read, it is easy to believe that we are headed for an apocalyptic Skynet-style obliteration at the hands of cold, calculating supercomputers, or that we are all going to live forever as purely digital entities in some sort of cloud-based artificial world. In other words, either The Terminator or The Matrix is about to become disturbingly prophetic.

When I jumped on the A.I. bandwagon at the end of 2014, I knew very little about it. Although I have been involved with web technologies for over 20 years, I hold an English Literature degree and am more engaged with the business and creative possibilities of technology than the science behind it. I was drawn to A.I. because of its positive potential, but when I read warnings from the likes of Stephen Hawking about the apocalyptic dangers lurking in our future, I naturally became as concerned as anybody else would.

So I did what I normally do when something worries me: I started researching it so that I could understand it. More than a year of constant reading, talking, listening, watching, tinkering and studying has led me to a pretty solid understanding of what it all means, and I want to spend the next few paragraphs sharing that knowledge in the hope of enlightening anyone else who is curious but naively afraid of this amazing new world.

The first thing I discovered was that A.I., as an industry term, has actually been around since 1956, and has had multiple booms and busts in that period. In the 1960s the A.I. industry was basking in a golden era of research, with Western governments, universities and big businesses throwing enormous amounts of money at the sector in the hope of building a brave new world. But in the mid-seventies, when it became apparent that A.I. was not delivering on its promise, the industry bubble burst and the funding dried up. In the 1980s, as computers became more popular, another A.I. boom emerged, with similar levels of mind-boggling investment being poured into various enterprises. But, again, the sector failed to deliver and the inevitable bust followed.

To understand why these booms failed to stick, you first need to understand what artificial intelligence really is. The short answer (and believe me, there are very long answers out there) is that A.I. is a range of overlapping technologies that broadly deal with the challenge of using data to make a decision about something. It takes in a variety of different disciplines and technologies (Big Data or the Internet of Things, anyone?), but the most important one is a concept called machine learning.

Machine learning basically involves feeding computers large amounts of data and letting them analyse that data to extract patterns from which they can draw conclusions. You have probably seen this in action with face recognition technology (on Facebook, or in modern digital cameras and smartphones), where the computer can identify and frame human faces in photographs. To do this, the computer references an enormous library of photos of people's faces and has learned to recognise the characteristics of a human face from shapes and colours averaged out over a dataset of hundreds of millions of different examples. The process is essentially the same for any application of machine learning, from fraud detection (analysing purchasing patterns from credit card purchase histories) to generative art (analysing patterns in paintings and randomly generating pictures using those learned patterns).
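
If you like to see ideas in code, here is a minimal sketch of that "learn from examples, then draw a conclusion" loop, using the fraud-detection framing from the paragraph above. It is not from any real system: the numbers, the two features (purchase amount and hour of day) and the use of scikit-learn's LogisticRegression are all invented purely for illustration.

```python
# A toy illustration of machine learning: show the program labelled examples,
# let it extract a pattern, then ask it about a purchase it has never seen.
# Requires scikit-learn; all data here is made up for demonstration only.
from sklearn.linear_model import LogisticRegression

# Each example is [purchase amount in dollars, hour of day of the purchase].
training_purchases = [
    [12.50, 13], [40.00, 18], [8.99, 9], [25.00, 20],      # ordinary purchases
    [950.00, 3], [1200.00, 4], [880.00, 2], [1500.00, 3],  # flagged as fraud
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = legitimate, 1 = fraudulent

model = LogisticRegression()
model.fit(training_purchases, labels)  # "learn" the pattern from the examples

# Draw a conclusion about a new purchase: a large amount in the middle of the night.
new_purchase = [[1100.00, 3]]
print(model.predict(new_purchase))  # prints [1], i.e. it looks fraudulent
```

A real fraud system would use millions of transactions and far richer features, but the shape of the process is the same: data in, pattern learned, conclusion out.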
