In partnership with Google, the Computer History Museum has released the source code of AlexNet, the neural network that in 2012 launched today's prevailing approach to AI. The source code is available as open source on CHM's GitHub page.
What Is AlexNet?
AlexNet is an artificial neural network created to recognize the contents of photographic images. It was developed in 2012 by then University of Toronto graduate students Alex Krizhevsky and Ilya Sutskever and their faculty advisor, Geoffrey Hinton.
Hinton is regarded as one of the fathers of deep learning, the kind of artificial intelligence that uses neural networks and is the foundation of today's mainstream AI. Simple three-layer neural networks with only one layer of adaptive weights were first built in the late 1950s. (This explainer gives more details about how neural networks work.) To go further, researchers needed networks with more than one layer of adaptive weights, but there was no good way to train them. By the early 1970s, neural networks had been largely rejected by AI researchers.
In 1957, Frank Rosenblatt (shown with Charles W. Wightman) developed the first neural network, the Perceptron. Division of Rare and Manuscript Collections/Cornell University Library
In the 1980s, neural network research was revived outside the AI community by cognitive scientists at the University of California, San Diego, under the new name of "connectionism." After finishing his Ph.D. in 1978, Hinton became a postdoctoral fellow at UCSD, where he worked with David Rumelhart and Ronald Williams. The three rediscovered the backpropagation algorithm for training neural networks, and in 1986 they published two papers showing that it enabled neural networks to learn multiple layers of features for language and vision tasks. Backpropagation, which is foundational to deep learning today, uses the difference between the current output and the desired output of the network to adjust the weights in each layer, from the output layer backward to the input layer.
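To make that idea concrete, here is a minimal sketch of backpropagation in a tiny two-layer network, written in present-day Python with NumPy rather than anything from that era; the layer sizes, learning rate, and toy data are illustrative assumptions, not taken from the papers mentioned above.

```python
import numpy as np

# Toy data: 4 examples, 3 input features, 1 target output (illustrative only).
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4))   # weights: input layer -> hidden layer
W2 = rng.standard_normal((4, 1))   # weights: hidden layer -> output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10000):
    # Forward pass: compute the network's current output.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Error: difference between the current output and the desired output.
    error = output - y

    # Backward pass: propagate the error from the output layer back toward
    # the input layer, adjusting each layer's weights along the way.
    delta_out = error * output * (1 - output)                   # output-layer gradient
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)   # hidden-layer gradient

    W2 -= learning_rate * hidden.T @ delta_out
    W1 -= learning_rate * X.T @ delta_hidden

# Final predictions; with these toy settings they should approach 0, 1, 1, 0.
print(np.round(output, 2))
```

The essential step is the backward pass: the output error is converted into weight adjustments layer by layer, moving from the output back toward the input.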
In 1987, Hinton joined the University of Toronto. Although far from the traditional centers of AI, the work of Hinton and his graduate students would make Toronto a center of deep learning research in the coming decades. One of Hinton's postdoctoral students was Yann LeCun, now chief scientist at Meta. While working in Toronto, LeCun showed that when backpropagation was used in "convolutional" neural networks, they became very good at recognizing handwritten numbers.
ImageNet and GPUs
Despite these advances, neural networks could not outperform other types of machine learning algorithms. They needed two developments from outside AI to pave the way. The first was the emergence of vastly larger amounts of training data, made available through the Web. The second was enough computing power to perform that training, in the form of 3D graphics chips known as GPUs. By 2012, the time was ripe for AlexNet.
Fei-Fei Li's ImageNet image dataset, completed in 2009, was key to training AlexNet. Here, Li (right) speaks with Tom Kalil at the Computer History Museum. Douglas Fairbairn/Computer History Museum
The data needed to train AlexNet was found in ImageNet, a project started and led by Stanford professor Fei-Fei Li. She and her graduate students began collecting images found on the Internet and classifying them using a taxonomy provided by WordNet, a database of words and their relationships to each other. Given the enormousness of the task, Li and her collaborators eventually outsourced the job of labeling images to gig workers, using Amazon's Mechanical Turk platform.
Completed in 2009, ImageNet was larger than any previous image dataset by several orders of magnitude. Li hoped its availability would spur new breakthroughs, and in 2010 she started a competition to encourage research teams to improve their image recognition algorithms. Over the next two years, however, the best systems made only marginal improvements.
The second condition needed for the success of neural networks was economical access to vast amounts of computation. Neural network training involves many repeated matrix multiplications, preferably done in parallel, which is exactly what GPUs are designed to do. NVIDIA, cofounded by CEO Jensen Huang, led the way in the 2000s in making GPUs more generalizable and programmable for applications beyond 3D graphics, especially with the CUDA programming system released in 2007.
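As a rough illustration of why this matters, the sketch below shows that a fully connected layer's forward pass is one large matrix multiply, which a GPU can spread across thousands of cores at once. It uses PyTorch only as a stand-in for modern GPU frameworks (AlexNet itself used hand-written CUDA), and the sizes are hypothetical.

```python
import torch

# A fully connected layer's forward pass: (batch of inputs) x (weight matrix).
# Sizes are illustrative, not AlexNet's actual dimensions.
batch = torch.randn(256, 4096)      # 256 examples, 4096 features each
weights = torch.randn(4096, 4096)   # layer weights

out_cpu = batch @ weights           # the same multiply, run on the CPU

# Dispatch the identical multiply to a GPU, where its output elements are
# computed in parallel; this is the kind of work CUDA exposes to programmers.
if torch.cuda.is_available():
    out_gpu = batch.cuda() @ weights.cuda()
```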
ImageNet and CUDA were, like neural networks themselves, fairly niche developments waiting for the right circumstances to shine. In 2012, AlexNet brought these elements together (neural networks, big datasets, and GPUs) for the first time, with pathbreaking results. Each of them needed the others.
How AlexNet Was Created
By the late 2000s, Hinton's grad students were beginning to use GPUs to train neural networks for image and speech recognition. Their first successes came in speech recognition, but success in image recognition would point to deep learning as a possible general-purpose solution to AI. One student, Ilya Sutskever, believed that the performance of neural networks would scale with the amount of available data, and the arrival of ImageNet provided the opportunity.
In 2011, Sutskever convinced fellow grad student Alex Krizhevsky, who had a keen ability to wring maximum performance out of GPUs, to train a convolutional neural network for ImageNet, with Hinton serving as principal investigator.
AlexNet used NVIDIA GPUs running CUDA code, trained on the ImageNet dataset. NVIDIA CEO Jensen Huang was named a 2024 CHM Fellow for his contributions to computer graphics chips and AI. Douglas Fairbairn/Computer History Museum
Krizhevsky had already written CUDA code for a convolutional neural network running on NVIDIA GPUs, called cuda-convnet, trained on the much smaller CIFAR-10 image dataset. He extended cuda-convnet with support for multiple GPUs and other features and retrained it on ImageNet. The training was done on a computer with two NVIDIA cards in Krizhevsky's bedroom at his parents' house. Over the course of the next year, he constantly tuned the network's parameters and retrained it until its performance surpassed that of its competitors. The network would ultimately be named AlexNet, after Krizhevsky. Geoff Hinton summed up the AlexNet project this way: "Ilya thought we should do it, Alex made it work, and I got the Nobel Prize."
Krizhevsky, Sutskever, and Hinton wrote a paper on AlexNet that was published in the fall of 2012 and presented by Krizhevsky at a computer vision conference in Florence, Italy. Veteran computer vision researchers weren't convinced, but LeCun, who was at the meeting, called it a turning point for AI. He was right. Before AlexNet, almost none of the leading computer vision papers used neural networks. After it, almost all of them would.
AlexNet was just the beginning. In the next decade, neural networks would advance to synthesize believable human voices, beat champion Go players, and generate artwork, culminating in the release of ChatGPT in November 2022 by OpenAI, a company cofounded by Sutskever.
Releasing the AlexNet Source Code
In 2020, I reached out to Krizhevsky to ask about the possibility of allowing CHM to release the AlexNet source code, given its historical importance. He connected me to Hinton, who was working at Google at the time. Google owned AlexNet, having acquired DNNresearch, the company founded by Hinton, Sutskever, and Krizhevsky. Hinton got the ball rolling by connecting CHM to the right team at Google. CHM worked with the Google team for five years to negotiate the release. The team also helped us identify the specific version of the AlexNet source code to release; there have been many versions of AlexNet over the years. There are other repositories of code called AlexNet on GitHub, but many of them are re-creations based on the famous paper, not the original code.
CHM is proud to present the source code of the 2012 version of AlexNet, which transformed the field of artificial intelligence. You can access the source code on CHM's GitHub page.
This post originally appeared on the blog of the Computer History Museum.
Acknowledgments
Special thanks to Geoffrey Hinton for providing his quote and reviewing the text, to Cade Metz and Alex Krizhevsky for additional clarifications, and to David Bieber and the rest of the Google team for their work in making the source code available.