Welcome to this issue of the WeAreDevelopers Dev Talk Recap series. This article recaps an interesting talk by Natalie Pistunovich, who spoke about the development of AI and MLOps.
What you will learn:
How the concept of AI became an academic field and how it evolved over time
Useful input to advance your development process in the future
About the speaker:
Natalie Pistunovich is a learner, a Google Developer Expert for Go, an OpenAI ambassador, a public speaker, and a sailor. When she’s not working on robust systems at Aerospike, she is organizing the GopherCon Europe and Cloud Nein conferences, and the Berlin chapters of the Go and Women Techmakers user groups. Previously, she was an Engineering Manager, a Software and Hardware Engineer, and the Co-Founder of a mobile start-up. In her free time, she wonders if there is life on Mars. Recently she spoke at the WeAreDevelopers DevOps Day.
A brief history of AI
Descartes and Galileo Galilei regarded thoughts, tastes, odors, and colors as symbolic representations, and perception as an internal process, so you could say that the seeds of AI were planted a long time ago.
In 1956, a group of scientists gathered for the Dartmouth Summer Research Project and worked on the topic of artificial intelligence. This is generally seen as the birth of AI as an academic discipline.
This started the era known as “Symbolic AI”. During this time, access to digital computers became possible. Some scientists soon recognized that these machines could manipulate numbers and symbols, which is sort of the essence of human thought.
In the following years, computers were able to solve algebra problems, prove theorems in geometry, and even learned how to speak English. At this point, some optimistic researchers predicted that we would have access to fully intelligent machines in just a few years, which made government agencies pour a lot of money into this field.
But in the mid-70s, researchers realized that they had underestimated the difficulty of their research and that general intelligence was not that close. Public criticism rose, funding slowly decreased, and the first “AI winter” began. During this period, not much was happening in this academic field.
In the 80s, AI became useful again, this time not as general AI but as narrow AI. Funding came back, and progress in the fields of neural networks and semiconductors enabled the development of practical technology.
When Apple’s and IBM’s desktops became more powerful in the 80s, the much more expensive and less performant dedicated AI hardware again fell out of favor, starting the “second AI winter”. Financing stopped, the costly machines became impossible to maintain, and the industry built around dedicated AI hardware collapsed.
In the 90s, when computers were powerful and cheap enough, AI was used again to solve specific problems rather than general ones. Researchers began to develop more sophisticated mathematical tools than ever before. One remarkable event of this period was when the Deep Blue computer, developed specifically for chess, beat the world champion, Garry Kasparov, in 1997.
Today we are in the era of Big Data, with AGI (Artificial General Intelligence) as the field’s long-term ambition. Industries of all kinds have a lot of data to process, and they are increasingly using AI in their business logic.
A brief history of AI models
DeepMind was founded in 2010 and was acquired by Google in 2014. While it continued its research agenda, it had some amazing achievements, like optimizing the energy consumption of Google’s data centers. Its most recent attainment is AlphaFold, a system that accurately predicts the shape of proteins.
The next important player is OpenAI (http://openai.com/), founded in 2015. This is an AI research and deployment company with the mission to ensure that artificial general intelligence benefits all of humanity.
In 2017, Google founded a division dedicated to artificial intelligence, tasked with advancing the state of the art and applying AI to products as well as to new domains. It was called Google AI.
Some more noteworthy models are GPT-1, GPT-2, and GPT-3.
Where’s AI today?
The latest GPT version, GPT-3, published in 2020, operates with 175 billion parameters. Switch Transformer, an open-source project by Google Brain, launched one year later in 2021 and already had 1.6 trillion parameters. This was a big step, but it’s worth mentioning that not everybody can just go ahead and use it, as you need special hardware to support all the features.
Another more recent launch was Wu Dao 2.0 by the Beijing Academy of Artificial Intelligence. It operates with 1.75 trillion parameters and is therefore exactly 10 times larger than OpenAI’s GPT-3 with its 175 billion.
Nvidia collaborated with Microsoft to create Megatron-Turing NLG (http://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/). This model scales back to 530 billion parameters.
The next big thing in AI development will be GPT-4, which is rumored to operate with 125 trillion parameters. That number becomes more interesting when you consider that the human brain also has roughly the same number of synapses.
Another development worth mentioning is OpenAI Codex. It’s a system that translates natural language into code and is the model that powers GitHub Copilot.
With the rise of new types of AI engines, a new kind of development was introduced, called “prompt engineering”. This field deals with how you phrase your request to the machine. It’s a very useful research field, as it tries to establish a universal way of communicating with AI engines.
When looking for a programming language to interact with these systems, Go is probably the best bet. It is not just very useful and practical for writing backend code, it’s also a wonderful language for infrastructure, according to Natalie. For example, you can cross-compile a binary for almost any environment just by setting the target in the build environment. A lot of modern infrastructure, like Kubernetes and Docker, is written in Go.
According to Natalie, No-Code and AI-Generated Code are two areas of the near future.
Let’s talk about some benefits of AI-enhanced coding. Repetitive parts of the development process, like writing tests or type definitions, will be automated.
If you are not sure how to solve a problem, you can get inspired by the autocomplete suggestions and consider more alternatives by asking the machine. Another benefit would be automated comments, as writing them yourself can get very time-consuming and there is always the risk of human error, such as overlooking an important line of code.
Finally, Natalie advises applying for access to AI engines now and practicing MLOps, as these areas will only become more relevant and can help you advance your coding.
Thank you for reading this article. If you are interested in hearing Natalie Pistunovich herself, you can do so by following this link: http://sitaracuisine.com/en/videos/mlops-and-ai-driven-development
About the author:
Benedikt is a media-technology student, computer hardware enthusiast, and proud dog dad. His mind is always on the latest tech trends and how to make use of them. Currently, he is doing an internship at WeAreDevelopers.
Ready to take charge of your dev career?
Join Europe's leading job platform for software developers!