Project name: Gluon-API
Project address: https://gitee.com/mirrors/Gluon-API

The Gluon API Specification

The Gluon API specification is an effort to improve the speed, flexibility, and accessibility of deep learning technology for all developers, regardless of their deep learning framework of choice. The Gluon API offers a flexible interface that simplifies the process of prototyping, building, and training deep learning models without sacrificing training speed. It offers four distinct advantages: simple, easy-to-understand code; a flexible, imperative structure; dynamic graphs; and high performance.
Gluon API Reference
Getting Started with the Gluon Interface

The Gluon specification has already been implemented in Apache MXNet, so you can start using the Gluon interface by following these easy steps for installing the latest master version of MXNet. We recommend using Python version 3.3 or greater and implementing this example in a Jupyter notebook. Setup of Jupyter is included in the MXNet installation instructions.

For our example, we'll walk through how to build and train a simple two-layer neural network, called a multilayer perceptron.

First, import `mxnet` and MXNet's implementation of the Gluon specification, along with `autograd`, `ndarray`, and NumPy:

```python
import mxnet as mx
from mxnet import gluon, autograd, ndarray
import numpy as np
```

Next, we use Gluon's `DataLoader` to download the MNIST handwritten digits dataset and load the training and test data, scaling the raw pixel values into the [0, 1] range along the way:

```python
train_data = mx.gluon.data.DataLoader(
    mx.gluon.data.vision.MNIST(
        train=True,
        transform=lambda data, label: (data.astype(np.float32)/255, label)),
    batch_size=32, shuffle=True)
test_data = mx.gluon.data.DataLoader(
    mx.gluon.data.vision.MNIST(
        train=False,
        transform=lambda data, label: (data.astype(np.float32)/255, label)),
    batch_size=32, shuffle=False)
```

Now we are ready to define the actual neural network, and we can do so in five simple lines of code.
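As a quick sanity check on what the `transform` lambda above does, here is a minimal NumPy-only sketch (the 28x28 array below is random stand-in data, not a real MNIST sample): it casts `uint8` pixel values to `float32` and scales them into [0, 1].

```python
import numpy as np

# A fake 28x28 "image" with uint8 pixel values in [0, 255],
# standing in for a single MNIST sample
data = np.random.randint(0, 256, size=(28, 28)).astype(np.uint8)

# The same scaling used in the DataLoader transform above
scaled = data.astype(np.float32) / 255

print(scaled.dtype)                                  # float32
print(scaled.min() >= 0.0 and scaled.max() <= 1.0)   # True
```

Feeding the network floats in a small, consistent range like this generally makes gradient-based training better behaved than raw 0-255 integers would.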
First, we initialize the network and define its architecture:

```python
# First step is to initialize your model
net = gluon.nn.Sequential()
# Then, define your model architecture
with net.name_scope():
    net.add(gluon.nn.Dense(128, activation="relu"))  # 1st layer - 128 nodes
    net.add(gluon.nn.Dense(64, activation="relu"))   # 2nd layer - 64 nodes
    net.add(gluon.nn.Dense(10))                      # Output layer
```

Prior to kicking off the model training process, we need to initialize the model's parameters and set up the loss function and the optimizer:

```python
# We start with random values for all of the model's parameters from a
# normal distribution with a standard deviation of 0.05
net.collect_params().initialize(mx.init.Normal(sigma=0.05))

# We opt to use the softmax cross entropy loss function to measure how well
# the model is able to predict the correct answer
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()

# We opt to use the stochastic gradient descent (sgd) training algorithm
# and set the learning rate hyperparameter to .1
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .1})
```

Running the training is fairly typical, all the while using Gluon's functionality to keep the process simple and seamless. There are four steps: (1) pass in a batch of data; (2) calculate the difference between the output generated by the neural network model and the actual truth (i.e., the loss); (3) use Gluon's `autograd` to calculate the derivatives of the model's parameters with respect to their impact on the loss; and (4) use the Gluon trainer method to update the parameters in a way that decreases the loss.

```python
epochs = 10
for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(mx.cpu()).reshape((-1, 784))
        label = label.as_in_context(mx.cpu())
        with autograd.record():  # Start recording the derivatives
            output = net(data)   # the forward iteration
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(data.shape[0])
    # Provide stats on the improvement of the model over each epoch
    curr_loss = ndarray.mean(loss).asscalar()
    print("Epoch {}. Current Loss: {}.".format(e, curr_loss))
```

We now have a trained neural network model, and can see how the accuracy improves over each epoch.
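The training loop above prints the loss, but measuring accuracy on the test set amounts to comparing the index of the highest output score against the true label for each sample. Here is a minimal NumPy sketch of that comparison; the batch of scores below is made up for illustration, whereas in the real loop you would use `net(data).asnumpy()` for a batch drawn from `test_data`:

```python
import numpy as np

# Hypothetical model outputs for a batch of 4 samples over 10 classes
scores = np.array([
    [0.1, 2.5, 0.3, 0.0, 0.1, 0.2, 0.0, 0.1, 0.4, 0.3],  # predicts class 1
    [3.0, 0.2, 0.1, 0.0, 0.1, 0.2, 0.0, 0.1, 0.4, 0.3],  # predicts class 0
    [0.1, 0.2, 0.1, 0.0, 0.1, 0.2, 0.0, 0.1, 0.4, 4.3],  # predicts class 9
    [0.1, 0.2, 5.1, 0.0, 0.1, 0.2, 0.0, 0.1, 0.4, 0.3],  # predicts class 2
])
labels = np.array([1, 0, 9, 5])  # the last prediction is wrong

predictions = np.argmax(scores, axis=1)  # class with the highest score
accuracy = float(np.mean(predictions == labels))
print(accuracy)  # 0.75
```

Averaging this per-batch accuracy over all of `test_data` after each epoch gives the epoch-by-epoch accuracy curve the text refers to.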
A Jupyter notebook of this code has been provided for your convenience. To learn more about the Gluon interface and deep learning, you can reference this comprehensive set of tutorials, which covers everything from an introduction to deep learning to how to implement cutting-edge neural network models.