
    MATLAB and Simulink Product Families get Deep Learning Capabilities

    MathWorks introduced Release 2018b of MATLAB and Simulink. The release contains significant enhancements for deep learning, along with new capabilities and bug fixes across the product families. The new Deep Learning Toolbox, which replaces Neural Network Toolbox, provides engineers and scientists with a framework for designing and implementing deep neural networks. Now, image processing, computer vision, signal processing, and systems engineers can use MATLAB to more easily design complex network architectures and improve the performance of their deep learning models.
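    As an illustration of the kind of workflow the toolbox supports, a small image-classification network can be defined as an array of layer objects and trained with `trainNetwork`. This is a minimal sketch; the layer sizes and the training data variables are placeholders, not from the release notes.

    ```matlab
    % Sketch: a small CNN defined with Deep Learning Toolbox layer objects.
    % Input size and layer parameters are illustrative.
    layers = [
        imageInputLayer([28 28 1])
        convolution2dLayer(3, 16, 'Padding', 'same')
        batchNormalizationLayer
        reluLayer
        maxPooling2dLayer(2, 'Stride', 2)
        fullyConnectedLayer(10)
        softmaxLayer
        classificationLayer];

    options = trainingOptions('sgdm', 'MaxEpochs', 4);

    % trainImages / trainLabels are placeholder variable names for your data:
    % net = trainNetwork(trainImages, trainLabels, layers, options);
    ```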

    MathWorks recently joined the ONNX community to demonstrate its commitment to interoperability, enabling collaboration between users of MATLAB and other deep learning frameworks. Using the new ONNX converter in R2018b, engineers can import and export models from supported frameworks such as PyTorch, MXNet, and TensorFlow. This interoperability enables models trained in MATLAB to be used in other frameworks. Similarly, models trained in other frameworks can be brought into MATLAB for tasks such as debugging, validation, and embedded deployment. In addition, R2018b provides a curated set of reference models that are accessible with a single line of code, and additional model importers enable the use of models from Caffe and Keras-TensorFlow.
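    In practice, the ONNX exchange described above is exposed through support-package functions along these lines (the file names here are placeholders):

    ```matlab
    % Import an ONNX model (e.g. one trained in PyTorch or TensorFlow)
    % as a MATLAB network for validation, debugging, or deployment:
    net = importONNXNetwork('model.onnx', 'OutputLayerType', 'classification');

    % Export a MATLAB-trained network to ONNX for use in another framework:
    exportONNXNetwork(net, 'exportedModel.onnx');
    ```

    Both functions require the ONNX converter support package, which is downloaded separately from the base product.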

    “As deep learning becomes more prevalent across multiple industries, there is a need to make it broadly available, accessible, and applicable to engineers and scientists with varying specializations,” said David Rich, MATLAB marketing director, MathWorks. “Now, deep learning novices and experts can learn, apply, and conduct advanced research with MATLAB by using an integrated deep learning workflow from research to prototype to production.”

    MathWorks continues to improve user productivity and ease of use for deep learning workflows in R2018b through:

    • The Deep Network Designer app, which enables users to create complex network architectures or modify pretrained networks for transfer learning
    • Improved network training performance that scales beyond the desktop, with the MATLAB Deep Learning Container on NVIDIA GPU Cloud and MATLAB reference architectures for Amazon Web Services and Microsoft Azure
    • Broadened support for domain-specific workflows, including ground-truth labeling apps for audio and video, and application-specific datastores that make it easier and faster to work with large collections of data
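    The transfer-learning workflow in the first bullet typically starts from one of the one-line reference models mentioned earlier, which can then be opened and modified interactively. A sketch, assuming the relevant pretrained-model support package is installed:

    ```matlab
    % Load a pretrained reference model with a single line of code
    % (requires the corresponding support package, e.g. for GoogLeNet):
    net = googlenet;

    % Open the Deep Network Designer app to inspect the architecture and
    % swap the final layers for a new classification task:
    deepNetworkDesigner
    ```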

    In R2018b, GPU Coder continues to improve inference performance by supporting NVIDIA libraries and adding optimizations such as auto-tuning, layer fusion, and buffer minimization. In addition, deployment support has been added for Intel and ARM platforms using Intel MKL-DNN and ARM Compute Library.
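    A hedged sketch of how such a deployment might be configured with GPU Coder, assuming the `coder.gpuConfig` / `coder.DeepLearningConfig` API and a user-supplied entry-point function named `myPredictFunction` (both names are assumptions for illustration):

    ```matlab
    % Configure GPU Coder to generate a library targeting NVIDIA cuDNN:
    cfg = coder.gpuConfig('lib');
    cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

    % Generate CUDA code for a prediction entry point taking a single image:
    codegen -config cfg myPredictFunction -args {ones(224,224,3,'single')}
    ```

    For the Intel and ARM targets mentioned above, the analogous step would select the MKL-DNN or ARM Compute Library back end in the deep learning configuration instead of cuDNN.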

    Available immediately, R2018b includes updates to the MATLAB and Simulink product family, including new capabilities for code generation, signal processing and communications, and verification and validation.

    ELE Times Research Desk
