
A Brief History of AI


By Anika Gupta



“I don’t think there’s a job you can have today where you’re not going to feel the impact of AI, in some shape or form, over the next five, ten years.”

 

Recently, while on a LinkedIn stalk, I was intrigued to see something other than the increasingly frequent “I am pleased to accept…”, “I am happy to accept…” and “I am thrilled to accept…” on my home page. Blackstone’s post, from which the quote above is taken, caught my attention. Since then, I have been dwelling on that seemingly all-consuming concept, artificial intelligence (AI), and how it has revolutionised, and will continue to revolutionise, the business world.

 

Undeniably, new large language models such as ChatGPT have changed the way we think, research and learn. ChatGPT has become a household name, often used as a search engine and able to come up with an answer to almost any question you throw at it. (Trust me, it has got you covered!) This year, market leaders are forecasting that generative AI will keep growing and start making a profound impact on the way business is conducted. Businesses are already adapting to this fast-paced technological climate and implementing AI into their strategies and processes.

 

With AI looking set to take over the business world (and the world in general) in due course, I have been prompted to explore what AI really is and how it all began.

 

In a nutshell, artificial intelligence is a speciality within the field of computer science concentrating on developing machines and machine-learning software that can mimic human intelligence, including the ability to absorb knowledge and solve problems. Intriguingly, it is not the three-toed sloth that comes up as the first result when you search “ai definition” on Google! The basic concept of AI actually traces back thousands of years, as far as 400 BCE, stemming from ancient Greek philosophers. Allegedly, inventors in ancient times ideated and attempted to build “automatons”: mechanical objects that could move independently, without human intervention. Indeed, the word “automaton” comes from ancient Greek and means “acting of one’s own will.” Nevertheless, while AI’s conception is rooted in the ancient world, it has only really been since the mid-20th century that AI has come to fruition.

 

A Harvard University blog illustrates that before 1949, computers were extremely expensive (leasing one could cost up to $200,000 a month!) and lacked a key prerequisite for intelligence. Computers couldn’t store commands, only execute them: they “could be told what to do but couldn’t remember what they did.” In 1950, the British mathematician and scientist Alan Turing began properly exploring AI and its implications, publishing ground-breaking ideas on its functions and use. His logical framework was based on the idea that if humans can use information to make decisions, why can’t machines? Additionally, Turing’s paper “Computing Machinery and Intelligence” proposed a means of testing machine intelligence called the Imitation Game (later the Turing Test), which experts began experimenting with. (Side note: yes, research for this AI reflection has involved watching Benedict Cumberbatch’s 'The Imitation Game' – would highly recommend a re-watch!)

 

Anyway, the 1950s saw prominent strides. In 1952, the American computer scientist Arthur Samuel pioneered the first successful programme to learn a game (checkers!) independently. Further, the official term “artificial intelligence” was coined and popularised after the American computer scientist John McCarthy held a workshop on the field at Dartmouth in 1956.

 

From the late 1950s to the 1970s, AI development went through a period of rapid growth and flourished. Machine-learning algorithms improved, and computers became cheaper and more accessible. At the same time, AI began seeping into mainstream culture, with concepts (often dystopian in nature) of robots with minds of their own being explored in books and films.

 

In the 1980s and 1990s, despite “dry spells” of low public interest in AI, impressive breakthroughs were made that saw AI implemented into everyday life, as seen in the introduction of the first commercially available speech-recognition software for Windows in 1997 and the release of the Roomba autonomous robotic vacuum cleaner in 2002 (right after the Y2K era!).

 

Now, in the 21st century, AI is everywhere. The creation and use of everyday AI tools such as search engines and virtual assistants has soared and will continue to grow. Using tools such as Siri has become second nature for much of the smartphone-using world, and dependence on generative AI (e.g. when sending messages or emails) seems only to be increasing. The use of AI in the business world is both a blessing and a cause for concern. AI has the potential to increase productivity and efficiency in the workplace by leveraging technology to automate routine tasks and free up human time. Nonetheless, serious ethical questions concerning privacy and security, machine-learning bias and accountability need careful consideration if AI is to become commonplace in corporate environments. The future of AI is unpredictable, yet exciting. Ultimately, companies need to gear up for, and be adaptable to, the inevitable benefits and challenges AI will bring, and embrace it in all its forms.

 


Side note: this post was not generated by ChatGPT XD