We are pleased to announce the release of version 2.1 of the minds.ai DeepSim platform. This release is now available to our customers and partners and adds new features, improves existing ones, and includes new examples. DeepSim is a platform for automatically generating software for hardware control or process design. It automates parts of our customers’ workflows that are currently done by experts, delivering both superior performance and faster time to market. Read on for the highlights of this release; the full changelog can be found in the release notes.
Notable new and improved features
DeepSim now comes with an integrated visualization library that works together with the logging and data analysis features, allowing you to inspect and interact with your training data. Visualization is one of our customers’ most requested features. It strengthens the platform’s analysis capabilities by enabling swift, detailed analysis for a quicker turnaround between training runs and a faster time to solution. You can load the library inside your Jupyter notebook to perform interactive analysis and optionally export the results for further processing with existing Python packages.
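The export step can be as simple as writing tracked metrics out in a standard format. The sketch below is illustrative only; the metric names are invented and the actual DeepSim export API is not shown. It uses the Python standard library to produce CSV that any downstream analysis package can consume:

```python
import csv
import io

# Hypothetical training metrics as they might look after an
# interactive analysis session (names and values are illustrative).
metrics = [
    {"step": 100, "reward": 0.42, "loss": 1.31},
    {"step": 200, "reward": 0.57, "loss": 0.98},
    {"step": 300, "reward": 0.66, "loss": 0.75},
]

# Export to CSV so the data can be picked up by any standard
# Python package (pandas, matplotlib, etc.) for further processing.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["step", "reward", "loss"])
writer.writeheader()
writer.writerows(metrics)

csv_text = buffer.getvalue()
print(csv_text.splitlines()[0])  # → step,reward,loss
```

From here the data round-trips into any existing Python workflow, which is the point of exporting in a standard format rather than a tool-specific one.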
An example of this can be found in the DeepSim usage movie and in Figs. 1 and 2. Additional examples can be found in the “FASTSim” and “visualization” tutorials that come with DeepSim.
Support for TF-Agents
TF-Agents is a Reinforcement Learning (RL) library developed by the Google TensorFlow team. In this release, we added support for using this library for your optimization runs, alongside the already included minds.ai-RL and RLlib libraries. DeepSim ships with a customized version of TF-Agents that supports InfiniBand network backends and different distribution methods, taking advantage of the latest developments in HPC to improve scalability when training with hundreds of workers.
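To give a feel for the kind of training loop an RL library such as TF-Agents manages for you (and that DeepSim then distributes across workers), here is a self-contained toy example: tabular Q-learning on a five-state corridor. It is purely illustrative and uses none of the DeepSim or TF-Agents APIs:

```python
import random

random.seed(0)

# Toy 1-D corridor: states 0..4, reaching state 4 yields reward 1.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # actions: 0 = left, 1 = right

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Standard Q-learning update.
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action]
        )
        state = next_state

# After training, the greedy policy moves right from every state.
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]
```

A production library adds replay buffers, neural-network policies, and distributed rollout collection on top of this basic loop; the customized TF-Agents build in DeepSim is what lets that collection scale to hundreds of workers.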
To simplify the evaluation, testing, and deployment of trained agents, DeepSim now includes an inference toolkit. It helps you standardize these common operations independent of the training library used, and it works together with the visualization library to create interactive visualizations and animations of test and inference runs. An example is presented in Fig. 3.
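One way such a toolkit can stay independent of the training library is to define a minimal common policy interface that each backend’s agents are adapted to, so the same evaluation code runs against any of them. The sketch below is a hypothetical illustration, not the actual DeepSim API; the `Policy` protocol, `ThresholdPolicy` stand-in, and `evaluate` helper are all assumptions:

```python
from typing import Protocol, Sequence


class Policy(Protocol):
    """Minimal common interface an inference toolkit might expose,
    regardless of which library trained the underlying agent."""

    def act(self, observation: Sequence[float]) -> int: ...


class ThresholdPolicy:
    """Stand-in for a trained agent: acts on the first observation value."""

    def __init__(self, threshold: float) -> None:
        self.threshold = threshold

    def act(self, observation: Sequence[float]) -> int:
        return 1 if observation[0] > self.threshold else 0


def evaluate(policy: Policy, episodes: list) -> list:
    """Run identical evaluation code against any backend's policy."""
    return [[policy.act(obs) for obs in episode] for episode in episodes]


# Two recorded episodes of observations, evaluated with one code path.
actions = evaluate(ThresholdPolicy(0.5), [[[0.2], [0.9]], [[0.7]]])
print(actions)  # → [[0, 1], [1]]
```

Because the evaluation path only sees the common interface, the same test harness (and any visualization built on it) works whether the agent came from minds.ai-RL, RLlib, or TF-Agents.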
Other Improvements and Features
- Dynamic reward function support, which allows you to design your reward function in your Jupyter notebook and use it directly on the cloud.
- Improved logging and debugging methods. You can now select which metrics and results to track during training for real-time inspection and post-training analysis.
- Updated documentation and additional examples:
  - The HEV example that uses the FASTSim simulator is now included.
  - A visualization tutorial, which helps you take advantage of all the features included in the new visualization library.
  - Additional documentation to help you get started with a new optimization project, such as a project checklist, a simulator integration guide, and neural network selection and design guides.
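As an illustration of the dynamic reward function idea above, a reward can be authored as a plain Python callable in the notebook and sanity-checked locally before submitting a cloud training run. The signature, state fields, and coefficients below are hypothetical, not DeepSim’s actual interface:

```python
# Hypothetical reward function as it might be authored in a notebook.
# The state dictionary keys and weightings are assumptions for
# illustration; DeepSim's actual reward interface may differ.

def reward(state: dict) -> float:
    """Penalize fuel use and deviation from the target speed."""
    speed_error = abs(state["speed"] - state["target_speed"])
    return -0.1 * state["fuel_rate"] - 0.01 * speed_error


# Local sanity check before launching the cloud run.
sample = {"speed": 58.0, "target_speed": 60.0, "fuel_rate": 2.5}
print(round(reward(sample), 4))  # → -0.27
```

Keeping the reward a plain callable is what makes the workflow "dynamic": you can iterate on the shaping terms interactively in the notebook and reuse the same function on the cloud without a separate packaging step.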