
DeepSim 2.3 Release

We are pleased to announce the release of version 2.3 of the minds.ai DeepSim platform. This release is now available to our customers and partners and contains new features, improvements, and examples of DeepSim in practical applications. The full changelog can be found in the release notes; this post highlights the key features of the release.

Notable new and improved features

Scalability Improvements

The scalability of the platform has been improved. DeepSim has now been validated for:

  • 2,000 training workers per single run.
  • 2,000 training workers for Hyperparameter Optimization (HPO) runs.
  • 15,000 training workers active on the backend cluster operating on a set of training runs.

Training with this many workers shortens development time and time to market when designing RL applications. Exploring the parameter space with automatic HPO helps identify the best-performing trained agent. The use of spot instances, innovative cost-reduction methods, and autoscaling keeps the cost of these large runs manageable.
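
As a rough illustration of the idea (a generic toy example; the training function, parameter ranges, and worker count below are hypothetical and not part of the DeepSim API), a hyperparameter search at this scale amounts to launching many training runs with sampled settings in parallel and keeping the best result:

  # Toy illustration of parallel hyperparameter search (hypothetical, not DeepSim code).
  import random
  from concurrent.futures import ProcessPoolExecutor

  def train_agent(params):
      """Stand-in for a real training run; returns (params, score)."""
      score = -(params["lr"] - 3e-4) ** 2 - 0.1 * abs(params["gamma"] - 0.99)
      return params, score

  def sample_params():
      # Sample a learning rate and discount factor from illustrative ranges.
      return {"lr": 10 ** random.uniform(-5, -2), "gamma": random.uniform(0.9, 0.999)}

  if __name__ == "__main__":
      candidates = [sample_params() for _ in range(64)]   # one candidate per worker slot
      with ProcessPoolExecutor(max_workers=8) as pool:    # a real backend scales this to thousands of workers
          results = list(pool.map(train_agent, candidates))
      best_params, best_score = max(results, key=lambda r: r[1])
      print("best hyperparameters:", best_params, "score:", best_score)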

Neural Architecture Search

Neural Architecture Search (NAS) is an automated method for finding the best-performing neural network architecture. Using smart search algorithms, DeepSim automatically finds the network that is most suited to the customer’s application. This feature enables end users to work effectively with DeepSim without any expertise in neural network architectures; instead, they can focus on tuning a small set of parameters that are directly relevant to the application. This makes DeepSim even more accessible to subject matter experts. The documentation contains various examples of how to best use this method for both vision-based and sensor-based applications.
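
As a simplified conceptual sketch (a generic toy example, not DeepSim's NAS implementation; the candidate architectures and the synthetic data are hypothetical), an architecture search can be framed as training several candidate networks and keeping the one that performs best:

  # Toy architecture search: evaluate a few candidate MLPs and keep the best one.
  # Generic illustration only; DeepSim's NAS uses its own search algorithms.
  import torch
  import torch.nn as nn

  torch.manual_seed(0)
  X = torch.randn(512, 8)                       # toy sensor-style inputs
  y = (X.sum(dim=1, keepdim=True) > 0).float()  # toy binary target

  def build_mlp(hidden_sizes):
      layers, in_dim = [], 8
      for h in hidden_sizes:
          layers += [nn.Linear(in_dim, h), nn.ReLU()]
          in_dim = h
      layers.append(nn.Linear(in_dim, 1))
      return nn.Sequential(*layers)

  def evaluate(hidden_sizes, epochs=50):
      # Train a candidate briefly and return its final loss (lower is better).
      model = build_mlp(hidden_sizes)
      opt = torch.optim.Adam(model.parameters(), lr=1e-2)
      loss_fn = nn.BCEWithLogitsLoss()
      for _ in range(epochs):
          opt.zero_grad()
          loss = loss_fn(model(X), y)
          loss.backward()
          opt.step()
      return loss.item()

  search_space = [[16], [32, 32], [64, 64, 64]]   # candidate hidden-layer layouts
  best = min(search_space, key=evaluate)
  print("best architecture (hidden layer sizes):", best)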

Google Cloud Platform support

Apart from running on Microsoft Azure, DeepSim now also supports Google Cloud Platform (GCP). This expansion allows customers who are already invested in GCP to deploy the scalable training backend in their existing cloud environment, keeping all data, compute infrastructure, and related resources within an existing GCP account. All DeepSim features that are supported on Microsoft Azure are also available on GCP.

Other Improvements and Features

  • Microsoft Windows support has been extended and now allows you to use all the features of the platform, including agent training, on Windows-based systems.
  • DeepSim can now be obtained via the Azure Portal/Marketplace. Contact your minds.ai representative if you are interested in this option.
  • Flexible stopping criteria have been added. You can now train for a fixed wall-clock time, a number of iterations, a target reward value, etc., or a combination of these, so the agent is trained exactly to the accuracy or runtime required (see the sketches after this list).
  • ONNX export has been added. Trained agents can now be easily exported to application-specific deployment hardware (a generic export example follows this list).
  • Updated documentation and additional examples:
    • An example that shows how to use DeepSim for job scheduling applications has been added.
    • Detailed explanations of the theoretical basis of the supported reinforcement learning methods are now included.
    • Best practices and tips & tricks for designing reward functions have been added.
    • Extensive support and documentation on how to perform small-scale training runs on the user’s local systems have been added.
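
For the flexible stopping criteria, the underlying logic can be pictured as combining several independent conditions. The sketch below is hypothetical; the parameter names are illustrative rather than actual DeepSim configuration keys.

  # Hypothetical sketch of combined stopping criteria (illustrative names only).
  import time

  def should_stop(start_time, iteration, mean_reward,
                  max_seconds=3600.0, max_iterations=10_000, target_reward=200.0):
      """Stop when any configured limit (wall clock, iterations, reward) is reached."""
      return (time.time() - start_time >= max_seconds
              or iteration >= max_iterations
              or mean_reward >= target_reward)

For ONNX export, a trained policy network can typically be written out with the standard tooling of the underlying framework, for example PyTorch's torch.onnx.export. This is a generic example rather than the DeepSim export path; the network and input shape are placeholders.

  # Generic ONNX export of a trained policy network with PyTorch (placeholder model).
  import torch
  import torch.nn as nn

  policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))  # placeholder policy
  dummy_input = torch.randn(1, 8)  # one observation with the expected input shape
  torch.onnx.export(policy, dummy_input, "policy.onnx",
                    input_names=["observation"], output_names=["action_logits"])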
