Motif launches BIM collaboration app with plugins for Revit and Rhino
https://www.engineering.com/motif-launches-bim-collaboration-app-with-plugins-for-revit-and-rhino/
Thu, 27 Mar 2025

This is “the first step of many” for the startup aiming to bring AEC software into the 21st century.

Motif, the BIM software startup that emerged from stealth in January with $46 million in funding, has launched its first product: a cloud-based collaboration platform for engineers and architects.

The platform, which is accessed through a web browser, provides a whiteboard-like workspace for those in the AEC industry. Users can bring in text and images, write and sketch, add comments, and collaborate in real time on an infinite canvas.

The twist is that Motif goes beyond 2D whiteboarding. Users can also bring in 3D BIM models, annotating them in all dimensions. With bidirectional plugins for Revit and Rhino, the models stay up to date, and comments made in Motif go back to the source.

Motif allows users to annotate 3D models in 3D space. (Image: Motif.)

Here’s a look at how Motif’s new platform works, why it’s far from finished and how the startup is attempting to modernize BIM software.

Motif’s first step of many

When Motif announced itself to the world earlier this year, it came out swinging.

“[T]he AEC industry is using 20th century tools to design 21st century buildings,” wrote Motif CEO Amar Hanspal, formerly co-CEO and chief product officer at Autodesk, in a blog post titled The Motif Vision.

“Our mission is to revolutionize building design by merging geometry, cloud services, and machine learning to enable a dynamic, collaborative, and intelligent process,” Hanspal added.

That mission, combined with the fact that Motif’s leadership team consists entirely of Autodesk veterans, suggested that the company was gunning for the BIM heavyweight, Autodesk Revit. Now that Motif has officially launched its platform, it’s clear that a full-featured Revit alternative is still a ways away.

“This is the first step out of many,” Matt Jezyk, vice president of product at Motif, told Engineering.com.

Motif remains focused on Hanspal’s vision, but there are two reasons to take it slow, according to Jezyk. One, it’s not easy to spin up a full-featured BIM platform (who knew?). Two, even if Motif could pull a Revit out of its hat, it would take time for users to switch over.

“We wanted to come at this problem a little bit differently and solve for the collaboration part first, and then add in more and more of the modeling capabilities,” Jezyk said.

Multiple Motif users can work on the same project concurrently. (Image: Motif.)

Motif sees collaboration as an underserved part of the BIM market. Jezyk, a trained architect, has seen firsthand the hoops his peers jump through to communicate their ideas. “It’s interesting and somewhat confounding to me,” he said, “the number of times that I see people working on basically graphic design problems.” Why should an engineer with a master’s degree waste time messing around in Adobe InDesign?

Jezyk pointed out that modern collaboration tools like Miro, Mural and Figma are changing how people work together and what they expect from their software. Motif wants to meet those expectations for the AEC industry.

“The first thing that we’re coming to market with is focused on collaborating and reviewing and collecting information from the sources where people are working today,” Jezyk said.

A collaboration platform for BIM users

Motif’s real differentiator for engineers and architects is its compatibility with 3D data. The platform supports common 3D file formats including OBJ and glTF, so users can drag and drop 3D models as easily as they can a PDF or picture.

And if those 3D models come from Revit or Rhino, even better. Motif has developed plugins that maintain a real-time link between those programs and Motif. A model created in Revit, for example, will retain all of its properties in Motif and stay up to date as changes are made in Revit. The link goes both ways: comments added in Motif will propagate back to Revit.

A plugin enables live, bidirectional communication between Revit and Motif. (Image: Motif.)

Right now, Motif users can only view and mark up the data from Revit or Rhino. They can’t modify the geometry or otherwise change the data. Their comments are sent back to Rhino or Revit, but no other annotations make the journey. Jezyk says these limitations are deliberate.

“We can technically push information back into Revit and change things too,” Jezyk said. “But we’re trying to be very intentional on that to support user workflows… right now, the workflow that people seem to expect is sort of a one-way stream.”

Comments from Motif are sent back to Revit via the Revit plugin. (Image: Motif.)
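
Motif hasn’t published its plugin API, so here is a purely hypothetical Python sketch of what a one-way comment stream like the one Jezyk describes might look like. Every name below is invented for illustration; none of it is Motif’s or Autodesk’s actual interface.

```python
from dataclasses import dataclass

@dataclass
class MotifComment:
    """One markup comment on the canvas, anchored to a model element."""
    element_id: str   # stable ID of the Revit element the comment targets
    author: str
    text: str
    position: tuple[float, float, float]  # anchor point in model coordinates

def push_comments_to_revit(comments: list[MotifComment], revit_session) -> None:
    """Replay Motif comments into Revit as markups (hypothetical API)."""
    for c in comments:
        # 'add_markup' is an invented stand-in for whatever the real plugin
        # calls; note that geometry edits are deliberately not sent back.
        revit_session.add_markup(
            element_id=c.element_id,
            note=f"{c.author}: {c.text}",
            anchor=c.position,
        )
```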

In addition to Revit and Rhino, Jezyk said Motif plans to develop plugins for AutoCAD, SketchUp, Grasshopper and Dynamo.

Motif also has a feature called Frames, which allows users to create presentations directly on the web platform. Jezyk compares it to PowerPoint slides, though he emphasized that the info in Frames stays up to date as models, renders or other data changes.

Motif paid particular attention to its user interface, aiming for a modern UI that looks simple without sacrificing sophistication.

“You don’t have to be an advanced parametric design person or a coder to figure this stuff out,” Jezyk said. “You can hand this to a high level executive and they could still use these tools, but it’s sufficiently powerful enough to work for the technical staff as well.”

How to access the Motif platform

Motif’s BIM collaboration platform is now available, and Jezyk says it already has paying customers among a stable of early adopters that helped guide the platform’s development. Some of those early adopters are DLR Group, Perkins&Will, Heatherwick Studio and the Nordic Office of Architecture.

If you’re interested in trying it out, for now you’ll have to contact Motif. Later this year you’ll be able to subscribe directly through the company’s website, though Motif is still determining plan and pricing details.

Jezyk suggested that Motif will embrace a freemium model, in which a limited version of the software will be free and additional functionality will be doled out in various subscription tiers. “It’ll be comparable to some of the other online tools that are out there today,” Jezyk said.

Motif, as its CEO proclaimed, wants to revolutionize building design. This new platform may be the first step of many, but if it makes it easier for building designers to work together, then it’s a step in the right direction.

SimScale pumps up AI simulation
https://www.engineering.com/simscale-pumps-up-ai-simulation/
Tue, 25 Mar 2025

A new AI foundation model will give SimScale users “an instant AI prediction” for their pump designs.

Welcome to Engineering Paper. Last week I covered news from Nvidia’s GTC Conference, at which the chipmaker boasted that its Blackwell processors are making simulation 50 times faster.

There’s more Nvidia-related simulation news to go over today, starting with cloud simulation provider SimScale.

SimScale’s AI foundation model for pump simulation

SimScale announced at GTC that it has developed “the world’s first foundation AI model for centrifugal pump simulation.” It uses AI to quickly predict the results of a full simulation.

SimScale developed the foundation model with Nvidia PhysicsNeMo, Nvidia’s framework for physics-based AI. It’s integrated into SimScale’s platform via Nvidia’s Omniverse Blueprint (see the following item for more on Blueprints).

SimScale’s new AI foundation model predicts the results of pump simulation. (Image: SimScale.)

SimScale already allows users to develop predictive AI models, but users must train those models themselves. The pump foundation model is different. Jonathan Wilde, vice president of product management at SimScale, told me that the model is already trained on thousands of simulations covering more than 50 pump models with different geometries and operating points.

“We’ve used that data to train a generic pump model,” Wilde said. “If somebody brings a pump to SimScale, they don’t need to pre-train anymore… they can get an instant AI prediction.”
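
The model’s actual interface isn’t public, but the pattern Wilde describes is standard surrogate modeling: instead of solving the flow equations, a pretrained network maps the pump’s geometry and operating point straight to performance numbers in a single forward pass. Here is a self-contained toy sketch of that idea, with randomly initialized weights standing in for a model trained on thousands of validated pump simulations:

```python
import numpy as np

def surrogate_forward(params, x):
    """Forward pass of a tiny MLP surrogate: params is a list of (W, b)."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)  # hidden-layer nonlinearity
    return x

# Inputs: a learned embedding of the pump geometry plus the operating point.
rng = np.random.default_rng(0)
geometry_embedding = rng.random(32)          # stand-in for a real encoding
operating_point = np.array([0.05, 2900.0])   # flow rate (m^3/s), speed (rpm)
x = np.concatenate([geometry_embedding, operating_point])

# Random weights here; a real foundation model's weights come from training.
params = [(rng.normal(size=(34, 64)), np.zeros(64)),
          (rng.normal(size=(64, 2)), np.zeros(2))]

head_m, efficiency = surrogate_forward(params, x)  # milliseconds, not hours
```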

SimScale took the training data from its public projects repository, which includes simulations from SimScale’s Community user tier. Those users get free, limited access to the cloud simulation platform, but their data is openly available (similar to Onshape for CAD). When I asked about the reliability of that public data, Wilde said that SimScale manually validated the simulation setups.

Paying SimScale customers can request access to the pump foundation model, but Wilde says the company hasn’t yet determined how it will license or charge for the technology. SimScale’s non-paying Community users cannot currently access any of SimScale’s AI capabilities, though Wilde said that “will almost certainly change.”

SimScale used Nvidia PhysicsNeMo (formerly Nvidia Modulus) to develop its pump foundation model. (Image: SimScale.)

This is just the first of what Wilde expects to be many AI foundation models.

“Pumps are just a starting point. We wanted to start with something that wasn’t too simple, but also not insanely complex,” Wilde told me. “Next we’ll make a valve model, and we’ll just keep iterating from there. It won’t always be CFD, we’ll add some FEA, but we’re going to try and continually build more foundation models.”

Altair integrates Omniverse Blueprint for real time digital twins

Last week Nvidia announced the general availability of its Omniverse Blueprint for real time digital twins. Altair followed up by announcing that the Blueprint is now integrated with the Altair One platform. Altair says that the new integration will give users turnkey access to Nvidia technologies including Omniverse, GPU acceleration and Nvidia NIM microservices.

What is an Omniverse Blueprint, anyways? There’s a fuzziness to how Nvidia and its partners are using the term. A Blueprint, of which there are many, is a reference application for software developers to build on. Tim Costa, senior director of CAE at Nvidia, told me last week that the Omniverse Blueprint for real time digital twins is “an open source demo” to “help our engineering solution provider partners adopt those technologies needed to provide interactive design to their customers.”

So when developers like Altair and SimScale (see above item) talk about integrating Blueprints into their software, it’s not entirely clear what they mean. Altair’s press release paints a broad picture:

“By leveraging Nvidia Omniverse Blueprint for Real-Time Digital Twins in Altair One, users can collaborate and simulate in a shared virtual environment in real time. The technology combines 3D design, AI, and ray tracing to create immersive digital environments that function as a next-level digital workspace… Users benefit from high-end rendering and streaming capabilities on the cloud that simplifies how software components work together in large systems, especially those used for AI, data processing, and graphics computing.”

Analyzing a blended wing body airplane. (Image: Altair.)

In other Altair news, the simulation developer announced that aerospace company JetZero is using Altair software to develop a blended wing body airplane, a type of aircraft that offers impressive fuel efficiency if you don’t mind feeling like you’re flying on a roller coaster.

More drama at Autodesk

Last month Autodesk slashed 9% of its workforce. CEO Andrew Anagnost said that the cuts were his decision, but some industry observers speculated that he was responding to pressure from shareholders who were loudly unhappy with Autodesk’s profitability.

Well, they’re still not happy. Starboard Value LP, a hedge fund holding more than $500 million in Autodesk shares, published a letter on March 19 expressing its concerns about “Autodesk’s long history of financial and operational underperformance” and calling for a change to the company’s board of directors. The letter acknowledges the recent staff cuts as “a step in the right direction” but adds that “substantial questions remain about the financial impact of these actions and how much benefit will ultimately be recognized in FY2026 and beyond.”

I can’t say how this corporate turbulence will ultimately impact Autodesk’s software, but if Starboard gets what it wants, Autodesk users probably won’t. What’s good for short-term profitability is rarely good for customers.

One last link

Still trying to understand what Dassault Systèmes is talking about with “3D UNIV+RSES”? I know I am. Engineering.com contributor Lionel Grealou offers some insight in Decoding Dassault’s 3D Universes jargon: combining virtual and real intelligence.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

Nvidia boasts 50x faster simulation at GTC 2025
https://www.engineering.com/nvidia-boasts-50x-faster-simulation-at-gtc-2025/
Tue, 18 Mar 2025

The chipmaker says its Blackwell processors have led to “an inflection point in engineering design.”

Welcome to Engineering Paper. Nvidia’s annual GTC conference is taking place this week in San Jose, California, and with it came the usual torrent of Nvidia news.

I can’t cover it all here, but you can check out Nvidia CEO Jensen Huang’s opening keynote for two hours of the chipmaker’s strategic vision, a heap of product announcements, some special stage props, and a few self-aggrandizing video interludes.

On the simulation side of things, one of Nvidia’s top announcements was really more of a brag: the company says its Blackwell chips (which were announced at last year’s GTC) are accelerating simulation software by up to 50x.

“We saw up to 50x better performance on a Blackwell chip as compared to a leading data center CPU,” Tim Costa, senior director of CAE and CUDA-X at Nvidia, told me. “This is across a variety of important CAE workloads, from CFD to discrete element methods, finite element analysis, lithography and SPICE simulation.”

Nvidia’s announcement calls out a who’s who of CAE developers that have accelerated their software with Blackwell: Altair, Ansys, BeyondMath, Cadence, Comsol, Engys, Flexcompute, Hexagon, Luminary Cloud, M-Star, Navasto, Neural Concept, nTop, Rescale, Siemens, Simscale, Synopsys and Volcano Platforms, to put it alphabetically.

One concrete example comes from Cadence, which used a Blackwell-based server to run a 10 billion cell aerodynamic simulation of an aircraft during takeoff and landing.

“This is a problem that previously required a TOP500 supercomputer with hundreds of thousands of CPU cores running for days,” Costa said. “But this run was done on a single [Nvidia GB200] NVL72 server in under 24 hours.”

Nvidia concurrently announced that its Omniverse Blueprint for real time digital twins, first previewed last year, is now generally available. (It’s also called OV RTDT, which my brain can’t help but read as R2D2.)

“The word Blueprint really means open source demo,” Costa said, and this one is meant to help Nvidia’s partners implement real-time digital twins. If you’re at GTC you can see some examples for yourself.

“At the show this week you’ll see real time digital twins of cars, of supersonic jets, of the human heart—that’s a really cool one—and then many other incredible applications from our partners and their customers,” Costa said.

(I’m not at the show this year, so if you see any of those things send me your pictures, thoughts and maybe a fun postcard: malba@wtwhmedia.com.)

So what’s the bottom line of all this boasting?

“Nvidia superchip architectures, combined with advances in AI physics, have created an inflection point in engineering design,” Costa said. “Grand challenges that [were] previously too complex and costly are being incorporated into typical design cycles, and interactive design with real-time digital twins are becoming a reality.”

(Image: Nvidia.)

Speaking of AI physics…

Geometry to simulation to AI

AI-based design optimization is the focus of a new integration between Luminary Cloud, nTop and Nvidia.

Luminary Cloud announced that its APIs can now create a pipeline between its GPU-based simulation platform, nTop’s computational design capabilities and Nvidia’s PhysicsNeMo framework for physics-based AI.

Together the three tools allow users to automatically create geometry, analyze it, and use that data to train predictive AI models.

“NTop generates the geometry and all the parametric changes. You could generate thousands of geometries. Those geometries are fed directly into Luminary [Cloud], which analyzes the physics and produces results,” Juan Alonso, CTO and cofounder of Luminary Cloud, told me.

“Then, leveraging the Nvidia NeMo ecosystem to train models… you could use that model in lieu of the full simulation. Even though we’re very fast, inference from these models is even faster.”
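
None of the three companies has published code for this pipeline, but the loop Alonso describes has a familiar shape: sweep design parameters, simulate each variant, then fit a fast surrogate on the accumulated results. Here is a self-contained toy sketch of that data flow; each stage is a stub standing in for the real nTop, Luminary Cloud and PhysicsNeMo APIs, which are not reproduced here.

```python
import random

def generate_geometry(params: dict) -> dict:
    # nTop's role: turn sweep parameters into a concrete geometry.
    return {"sweep_deg": params["sweep_deg"], "span_m": params["span_m"]}

def run_cfd(geom: dict) -> dict:
    # Luminary Cloud's role: analyze the physics (faked with a toy formula).
    return {"L/D": 20.0 - 0.1 * abs(geom["sweep_deg"] - 30.0) + geom["span_m"]}

def train_surrogate(dataset: list):
    # PhysicsNeMo's role: fit a model on (params, result) pairs. A
    # nearest-neighbor lookup keeps the sketch tiny.
    def predict(params: dict) -> dict:
        nearest = min(dataset, key=lambda d:
                      abs(d[0]["sweep_deg"] - params["sweep_deg"]))
        return nearest[1]
    return predict

sweep = [{"sweep_deg": random.uniform(20, 40), "span_m": 3.0}
         for _ in range(1000)]
dataset = [(p, run_cfd(generate_geometry(p))) for p in sweep]
surrogate = train_surrogate(dataset)
print(surrogate({"sweep_deg": 31.5, "span_m": 3.0}))  # instant, no CFD run
```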

While both Luminary Cloud and nTop offer generic geometry tools, Alonso said the integration will particularly benefit users that routinely rely on fluid or thermal analysis, such as in the automotive and aerospace industries. The developers demonstrated the integration by optimizing the lift and drag characteristics of a flying wing (see below image).

(Image: Luminary Cloud / nTop.)

Bradley Rothenberg, CEO of nTop, provided a few more details and images in a LinkedIn post yesterday.

This is the first official collaboration between Luminary Cloud and nTop, but both companies have worked with Nvidia before. Luminary Cloud and Nvidia jointly demonstrated a virtual wind tunnel last November when Nvidia announced Omniverse Blueprints for real-time digital twins (now generally available; see above item). Last September, nTop announced a separate Nvidia integration and an investment from Nvidia’s venture capital arm, NVentures.

Ansys integrates Omniverse

In other Nvidia news, Ansys announced that it will integrate Nvidia Omniverse in some of its simulation software, starting with Ansys Fluent for fluid simulation and Ansys AVxcelerate Sensors for sensor simulation. Other Ansys apps will follow, according to the developer.

The Omniverse integration will allow Ansys users to render photorealistic models directly in the Fluent or AVxcelerate Sensors interfaces, which Ansys says will facilitate simulation data preparation and communication. PyAnsys, Ansys’ collection of Python packages, will further allow users to automatically format simulation data for their own applications built on Nvidia Omniverse.

(Image: Ansys.)
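
PyAnsys itself is public and pip-installable, so scripting a Fluent solve looks roughly like the sketch below, though exact call paths vary across PyAnsys versions. The Omniverse packaging step is the part Ansys hasn’t documented, so it appears here only as a clearly hypothetical stub.

```python
import ansys.fluent.core as pyfluent  # real PyAnsys package

def export_for_omniverse(solver, usd_path: str) -> None:
    """Hypothetical placeholder: the announcement doesn't document how
    PyAnsys formats results for Omniverse-based apps."""
    print(f"would write an Omniverse USD stage to {usd_path}")

# Call names follow the PyAnsys settings API but differ between versions;
# treat this as a sketch rather than copy-paste-ready code.
solver = pyfluent.launch_fluent(mode="solver")
solver.file.read_case(file_name="mixer.cas.h5")          # placeholder file
solver.solution.run_calculation.iterate(iter_count=200)
export_for_omniverse(solver, "mixer_stage.usd")
```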

“The integration of Omniverse technologies within Fluent allows us to visualize complex physics simulations that give us and our customers intuitive insight into how our equipment operates in stunning detail,” said Andrew Hobbs, director of advanced technologies at Astec Industries, in Ansys’ press release.

One last link

Are ants smarter than humans? The evidence is mounting. Read Mark Jones’ Technical thinking: Finally, proof that the Andy Letter was right to prepare for the upcoming war with Paratrechina longicornis.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

Renesas’ $5.91B Altium acquisition bears fruit
https://www.engineering.com/renesas-5-91b-altium-acquisition-bears-fruit/
Tue, 11 Mar 2025

Renesas 365, Powered by Altium, will be a cloud-based solution for electronic system design.

Welcome to Engineering Paper and this week’s batch of design and simulation software news.

First, thanks to all the readers who wrote in about last week’s column, in which I covered Backflip’s new mesh-to-CAD AI tool. Clearly I wasn’t the only one who was intrigued by it.

Reactions generally fell into two camps:

  • Wow, how cool is that! or
  • AI is one step closer to killing us all.

Which side are you on? Foil my attempts to achieve Inbox Zero by sending your opinions to malba@wtwhmedia.com.

And now the news.

Renesas announces Renesas 365, Powered by Altium

Semiconductor manufacturer Renesas has announced the first fruit of its $5.91 billion acquisition of EDA developer Altium in 2024. Renesas 365, built on the Altium 365 platform, will be released in early 2026 as a new solution for electronics system development.

Screenshot of Renesas 365. (Image: Renesas.)

According to Renesas’ announcement, the new solution “connect[s] Altium’s advanced cloud platform with Renesas’ comprehensive embedded compute, analog & connectivity, and power portfolio… [to] streamline workflows, accelerate time to market, ensure digital traceability and real-time insights, and improve decision-making from concept to deployment.”

Renesas will showcase live demos of Renesas 365 at Embedded World 2025 in Nuremberg, taking place this week from March 11 to 13.

Questions and answers on 3DLive, Dassault’s Apple Vision Pro app

This summer Dassault Systèmes will release 3DLive, an app for Apple’s Vision Pro spatial computing headset that will connect to the 3DExperience Platform.

What? How? Why?

I asked all those questions (though in slightly more syllables) of Tom Acland, CEO of 3DExcite at Dassault Systèmes. He explained 3DLive’s capabilities, the benefits it will bring to users and its place in Dassault’s vision of 3D UNIV+RSES.

“The VR thing’s been done before, but this is a next-generation capability for putting people inside the model,” Acland told me.

Don’t miss the full Q&A with Tom Acland on Engineering.com.

CoreTechnologie improves its CAD simplification software

CoreTechnologie has released a new version of 3D_Evolution Simplifier, its software for CAD model reduction. The update adds rule-based automation features that CoreTechnologie says will make it easier to prepare models for simulation, digital twins, product catalogues, virtual reality and other applications that benefit from simplified 3D models.

Illustration of the mesh reduction function in 3D_Evolution Simplifier. (Image: CoreTechnologie.)

Features of the updated 3D_Evolution Simplifier include the shrinkwrap function, which filters out internal components; the bounding shape function, which replaces detailed parts with simplified substitute bodies; the mesh reduction function, which CoreTechnologie says can reduce mesh sizes by up to 98%; and more.
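
3D_Evolution Simplifier is proprietary, but the core idea of mesh reduction is easy to demonstrate with the open-source Open3D library: quadric edge-collapse decimation merges triangles until a target budget is met while trying to preserve the overall shape. A minimal sketch, not CoreTechnologie’s tool (file names are placeholders):

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("pump_housing.stl")  # placeholder file
original = len(mesh.triangles)

# Keep roughly 2% of the triangles, in the spirit of the "up to 98%"
# reduction CoreTechnologie quotes for its own function.
simplified = mesh.simplify_quadric_decimation(
    target_number_of_triangles=max(1, original // 50))

print(f"{original} -> {len(simplified.triangles)} triangles")
o3d.io.write_triangle_mesh("pump_housing_lod.stl", simplified)
```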

Kisters releases 3DViewStation v2025.0

Kisters has released the 2025 version of its CAD viewing software, 3DViewStation. The biggest update is a simplified user interface. With a reorganized ribbon menu that groups related functions together, 3DViewStation 2025 will require fewer mouse clicks and allow users to be more efficient, according to Kisters.

Screenshot of 3DViewStation v2025.0. (Image: Kisters.)

3DViewStation 2025 also adds the ability to organize views into groups, allowing users with large numbers of views to more easily navigate between them.

Autodesk cuts 9% of workforce

Late last month Andrew Anagnost, CEO of Autodesk, sent a memo announcing a massive 9% cut to the company’s workforce, totaling around 1,350 employees.

Anagnost wrote that the layoffs are a response to shifting corporate strategy, evolving investments in AI and cloud technology, and increasing economic and geopolitical uncertainties. “This decision was made by myself and CEO staff and is not the result of any third-party pressure,” he wrote.

Best of luck to all those affected.

One last link

I leave you with a brief reflection on humanity and its inventions from my colleague Lisa Eitel at Design World: A 1993 mystic on the nature of AI.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

3DLive on the Apple Vision Pro: Q&A with Tom Acland
https://www.engineering.com/3dlive-on-the-apple-vision-pro-qa-with-tom-acland/
Thu, 06 Mar 2025

3DExcite’s CEO explains how Dassault Systèmes’ visionOS app works and why it’s a crucial part of the next-generation 3DExperience platform.

Dassault Systèmes recently announced 3DLive, an upcoming app for the Apple Vision Pro headset that will bring spatial computing to users of the 3DExperience platform.

Scheduled for release this summer, 3DLive is part of Dassault’s next-generation concept of “3D UNIV+RSES”, a strategy that leans heavily on the merging of physical and virtual reality.

To learn more about 3DLive, Engineering.com sat down with Tom Acland, CEO of 3DExcite at Dassault Systèmes. He explained how the visionOS app works, why Dassault chose to collaborate with Apple, and how 3DLive fits into the 3D UNIV+RSES strategy.

Tom Acland, CEO of 3DExcite. (Image: Tom Acland via LinkedIn.)

The following transcript has been edited for brevity and clarity.

Engineering.com: What’s 3DLive all about?

Tom Acland: The release that we’re making in the summer of this year really consists, from a product perspective, of two components. There’s the 3DLive app, which is going to be available on the Apple Vision Pro. It’s the way that people access the information which is published from the 3DExperience platform.

The other half is the ability to create use case focused scenarios to help people in business collaborate with each other. And that tool chain is resident on the 3DExperience platform. So using the components which are on 3DExperience platform, you can aggregate different pieces of the virtual twin which are relevant in the context of a particular use case.

Is that a new app within 3DExperience?

We’re leveraging technology which was already there on the 3DExperience platform, but we’ve been able to extend it to make the experiences that you publish spatially accessible.

Specifically, there’s an app called Creative Experience which is part of the Experience Creator role. And that app has been available for many years already. It’s typically used by engineering teams who need to explain the value of what it is that they’re doing. It’s also available in the 3DExperience Works portfolio as Product Communicator.

[Related: How to use 3DExcite’s Creative Experience]

So Solidworks users will be able to use this tool?

Yeah, and they already use it today.

For what?

You can craft experiences for use in a 2D context. You can also generate portable content from the experience that you’ve created. So for example, if you need to create specific content which you’re going to use on your website, or animations, videos, those things, those can also be generated from the same application and using the same tool chain.

Could you tell me more about 3DExcite?

3DExcite is one of the Dassault Systèmes brands. It helps manufacturers take their products to market. So we help our manufacturing clients express the value of the inventions that they’re coming up with on the 3DExperience platform.

Obviously a big part of that is the storytelling. So how does this particular innovation help the people that it’s designed to serve? In a Solidworks world, for example, where you have people making machine tools, you have a similar challenge. How do I show my customer what it is that I’m developing?

So is 3DLive a marketing tool?

Well, if you think about traditional marketing, that’s often tied up with advertising. But as these products become more sophisticated, for example, more software defined, the way that you show the value of the product to a customer is not just through advertising. You have to be able to illustrate and explain new features.

For example, you’ve just released a product over-the-air. You might need some content which appears in the app which goes with the product, so that users can understand this new feature. So you can create advertising content, but you can also create content which is useful for end users, and that’s really the key.

What we’re seeing because of software definition and the speed of change is that it’s increasingly important that you define what the value is for the customer as early as possible. So you could look at this as a way of capturing requirements from a customer-centric perspective. So you’re not just writing things down, you’re modeling what the outcome of that experience is going to be so that you can show it to someone: “Is this what you want?” You can engineer it and then make sure that your engineering matches what you’re aiming for.

So being customer-centric is not just about communication outwards, it’s about communication inwards to everyone who’s building that product, so that everyone understands what it is we’re trying to make.

Dassault Systèmes’ promo video for 3DLive.

How closely did you work with Apple to develop the new app?

The idea goes well back before the collaboration with Apple. But what is special about the technology that Apple has developed for spatial computing is that you have a very powerful set of capabilities on the Apple Vision Pro, in terms of processing, in terms of sensors, in terms of the OS, which allows you to deliver those experiences in a very true-to-life fashion. And they’re easy to use.

The collaboration with Apple goes back over a year. They were actually at 3DExperience World last year. They came to visit us. We’d already started conversations. And it’s been a journey that’s been going on for over a year to work out exactly how 3DExperience can interact with and work with the Apple Vision Pro.

I think people sometimes talk about these things generically as a headset, right? But we see the Apple Vision Pro as not just another headset. It’s a different type of capability, which is a function of the hardware, but also the software which is powering those kind of experiences. So we don’t really see this as a case of just swapping out one headset for another. The VR thing’s been done before, but this is a next-generation capability for putting people inside the model.

How so? How does this Vision Pro app compare to VR experiences on other headsets?

There are a whole lot of specifics about the Apple Vision Pro capabilities which I’m not going to go into myself, but I’ll tell you about the benefits in terms of what the difference is. If we’re talking about the use cases which are typically addressed in VR today in conjunction with the 3DExperience platform, you’re often talking about design type situations where you’re looking at the exterior shell, the physical design of the product. And that’s typically a function of configuration, materials and geometry.

[Related: Should engineers buy the Apple Vision Pro?]

What we’re doing with the Apple Vision Pro is radically different, because you’re actually looking at all of the facets of the interaction with that thing, including kinematics, including systems information, and putting that in the context of an end user benefit. So it’s a much richer experience that you can create, and you can really get a sense of how the thing that you’re building is going to help the people it’s designed to serve. It’s not just a tool for designers. It’s a tool for everybody who needs to understand the benefit of a particular process or a particular product itself.

So this isn’t an existing capability being ported to a new headset?

No, it’s an entirely new thing. And it’s just the start. The whole idea that we’re trying to address in working with the Apple Vision Pro on the 3DExperience platform is a pillar of gen seven.

[Gen seven refers to 3D UNIV+RSES, “the seventh generation of representation of the world introduced by Dassault Systèmes”.]

So it’s a strategic aspect to the next-generation of the 3DExperience platform, which is designed to help people design better products to deliver more value to their customers, but also help customers understand what it is that they’re getting. If you’re selling a robot, for example, the customer may not understand how the robot’s made, but they want to understand how the robot’s going to fit their specific use case. So it’s as much to help the customers understand the value of the product that’s being engineered as it is a tool for the engineer to make a better product.

How will users access the 3DLive app, and what will it cost?

In an enterprise context, if you’re deploying Apple technology, you typically have an enterprise app store. Your devices themselves are often managed through device management, so you have a very similar experience to what you would have as a consumer, but the applications available to you as a user of an enterprise are curated by your IT department. And that’s using standard Apple technology for making iPhones, iPads, etc. part of the enterprise ecosystem.

So the app is going to be available to people by those means, on the enterprise app store for companies who’ve deployed this process. And there is no additional charge expected for having that app available in that way. Sign in through your 3DExperience ID and it’s up and running.

How you then discover those experiences, how they’re organized, is part of the value of the process. It’s not just the experience itself, it’s how you access it in context, so that people who are part of that work group can look at the things that they need to see together.

Do you plan to bring this technology to other XR headsets akin to the Apple Vision Pro, like Samsung’s Project Moohan?

The idea of spatial computing—or sense computing, as we call it, because we think it could become broader in the next 20 years—is going to be a very emerging field. So there may be other technologies by Apple or by other people which are relevant. And of course we want to embrace the best of the market to be able to execute on the Dassault Systèmes vision for sense computing.

That said, there’s something unique about the level of integration in the Apple stack. This is my personal view. If you are able to combine that very, very sophisticated hardware with the OS, with the experiences that are deployed to that device, you can achieve completely different things than when you have, let’s say, an ecosystem where the OS is separate from the device.

The ability to create that sense of stability, where everything is locked in place, is what you need if you want to, say, walk up to a machine and press a button and the virtual system responds in the right way. That’s very, very hard to achieve if you have dozens of different devices all nominally conforming to a spec. So we see that the technology that Apple has brought to market is at the moment leading not just because of the hardware that’s inside, but because of the approach. It’s because of the fact that you’ve got that close integration between the software and the hardware on the device. It allows you to do completely different things. And we don’t really see too many other companies at the moment with that level of capability.

So we’ll see what happens with the space. It’s likely to evolve, and there’ll be new types of devices, but obviously we want to work with the ones that actually achieve the objectives of Dassault Systèmes and 3DExperience.

You gave the example of walking up to a machine and pressing a button. Is that a capability of this app?

Yes. In one of the demos there’s a training scenario that’s an example of how a maintenance engineer who’s designing maintenance procedures would create a little boot camp for an operator to run through that procedure virtually. And you can imagine that if you’re trying to get a new line stood up, or you’re trying to turn over a line, or even an entire factory, there’s going to be hundreds of those specific use cases. And in that environment, it’s very important that you have a sense of being in the place and things behave the way that they’re going to behave.

So yeah, if there is an actuator in the context of a particular instruction that you’ve got to work through, that will be active, and you’ll be able to interact with it like in the real world. Likewise, if you have a screen in there that’s going to show you your work instruction, for example, it’ll have the actual work instruction that you’re going to encounter in the real world. So you’re really trying to give people a sense of proximity to the real world so that they really understand what it is they’re going to do.

If I had something like a TV stand in the app, could I go up to it and move it up and down?

Yes, absolutely. Kinematics is one of the things that makes a big difference in terms of traditional digital content creation versus the approach we’re taking here. Because the way that you make the experience is derived straight from the CAD, it has all of the kinematics and so on available to it, to make sure that the way those things are represented are true to the engineering.

And it’s quite possible that there’s a bit of back and forth between the engineering team and the customer. Things change. You don’t want to have to go back to the start again, export all the CAD again, go through that loop, which typically takes a long time for every single engineering change. You want to be able to just update that specific aspect, like the kinematics, and then it’ll be available to you within minutes to be able to show that update to the customer.

If I’m in the headset and my colleague next to me updates the model, will that change propagate to 3DLive?

One of the other aspects of gen seven is the virtual companion. Virtual companion is about giving people superpowers through the use of generative AI or AI in general, but also about being able to automate processes that previously were done manually.

So the objective is to do exactly as you described. That those processes which are already repeatable and manageable can also be automated, so that you can essentially run those processes in the cloud fully automatically.

I can’t tell you that’s all going to be there in the summer, but that’s exactly the intent. Once you’ve created those scenarios and you’ve created the relationships between those scenarios and the CAD, you don’t need to have to come in every time and run it again manually.

What about collaboration? Could both of us be in a headset and work on the same thing at the same time?

That will be available at the release in the summer. There are still a few kinks being worked out there, but that is absolutely the idea. It’s one of the things that we see as being most in demand in those immersive environments, the ability to be colocated in a virtual space with somebody else.

You use the term sense computing. How do you see different senses being incorporated into spatial computing?

We don’t know 100% yet. I think touch haptics is probably next in terms of being able to get the idea of surface texture. I think that’s quite likely to be the next one. Smell, I’m not so sure. That’d be kind of cool, but we’ll see how long it takes us to get there.

What else excites you about 3DLive?

I think it’s the direction of where spatial computing is going and why it’s important to see spatial computing as a function of virtual twins.

At the moment we’re talking mostly about creating virtual representations of something which is going to arrive in the future. But you can reverse the polarity of that. In the future, you’ll be able to superimpose on the real world things which are coming from the virtual, so you’ll be able to actually explain to people how devices or how products are composed, how they work, by reverse engineering the real world and getting back to where the information came from.

I think that is a super exciting outlook, because you’re not just talking about going from virtual to real. You’re talking about going from real to virtual as well. And to do that you need to be able to create continuity from the virtual to the real. The devices are able to recognize things precisely because they’ve been trained on the information which is inherent in the virtual twin.

In order to be able to do that recognition, you need to be able to have a well-defined model, which will allow these spatial computing devices generally to identify objects and then associate them with information that isn’t necessarily immediately visible.

So you’re going to see the virtual and the physical worlds kind of blend together, not just in terms of engineering and design, but in terms of use, and maybe in terms of circularity. Like, what else could that thing be if I were to deconstruct it? What elements of that could I take out? How could I recycle and how could I use them some other way? That’s part of what our purpose is, to make sure that there’s more value out of less resources that get consumed.

How does 3DLive fit into the concept of 3D UNIV+RSES?

I think the underpinning construct is the idea of moving from data up to representation of knowledge. CAD or IoT information, for example, unless it’s contextualized in a scenario which is meaningful to somebody, remains a little bit abstract. It makes it a little bit difficult to leverage. What does it mean semantically? Not just as a number, but what does it mean? And also, how is that knowledge used by people to create something new? And that’s the know-how element that occurs when people work together around a set of known concepts.

So you’re not modeling just the product. You’re talking about how the product interacts with other products and people in the context of its use. What happens to it in the real world? What can we learn from its actual interactions with the real world to make the design better? And that means we have to model the context to a certain extent at the same level of fidelity as we would have typically modeled the product in the past. And that’s quite an exciting new era, because we’re going to be modeling factories, we’re going to be modeling hospitals, we’re going to be modeling any place where these products add value to people’s lives, not just the products themselves. And I think that’s a sort of a step change in how we think about designing things for the real world.

So there’s a lot going into gen seven, which is about elevating what’s been done so far on the 3DExperience platform into the era of AI by adding meaning to data through experiences like we’ve been talking about with sense computing. And I think this is going to be quite an exciting journey as these things evolve all around us.

AI can now use Solidworks
https://www.engineering.com/ai-can-now-use-solidworks/
Tue, 04 Mar 2025

Backflip’s new AI-based plug-in turns mesh data into fully parametric CAD models.

Welcome to Engineering Paper. Today’s top story is a fascinating new release from Backflip, the generative AI startup that emerged from stealth late last year with a design platform that turns text prompts into 3D models.

Forget text-to-3D. Today Backflip announced something even more interesting: an AI model that can create parametric CAD models from mesh data. It’s available through a web app and a Solidworks plug-in, and I got to see the latter in action.

I met with Backflip founders Greg Mark and David Benhaim at 3DExperience World 2025 in Houston last week, where they showed me a demo of the new AI tool. At a conference abuzz with AI, this was the most impressive feature I saw—and it’s not just a spec on the horizon. It’s ready and working today.*

“We’ve trained a model that takes mesh data, like a point cloud, and automatically gives you a parametric part,” Mark told me.

Anyone with a 3D scanner can readily get mesh data. But meshes are mere geometry. Parametric CAD models are much richer—they’re precise, editable and manufacturable—but making them requires time and expertise.

Now, you can just use AI.

“You send the scan mesh out to our AI, which then looks at the geometry and figures out the feature tree steps that a person would do,” Benhaim said. “And then we’ll go and create that in Solidworks. Takes about 30 seconds to a minute.”

You heard it here first: AI can now use Solidworks. Backflip’s tool directly drives the CAD software, sketching, extruding, revolving, and otherwise building a parametric model to match the mesh input. Backflip generates four options for users to pick from. They’re native Solidworks models, so users can edit them as they would any other part.

The Backflip plug-in for Solidworks creates a parametric CAD model from a mesh file. (Image: Backflip.)
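
Backflip hasn’t published its intermediate representation, but “figures out the feature tree steps” suggests the model emits an ordered sequence of CAD operations that the plug-in then replays through Solidworks. Here is a hypothetical sketch of what such a sequence might look like; the operation names and the `cad_api` object are invented for illustration, not Backflip’s or Dassault’s actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureStep:
    """One step an AI might emit, mirroring what a human would do in CAD."""
    op: str                  # "sketch", "extrude", "revolve", "fillet", ...
    params: dict = field(default_factory=dict)

# A plausible AI-emitted tree for a simple L-bracket (illustrative only).
bracket = [
    FeatureStep("sketch", {"plane": "front", "profile": "L-outline"}),
    FeatureStep("extrude", {"depth_mm": 6.0}),
    FeatureStep("sketch", {"plane": "top", "profile": "bolt-circle"}),
    FeatureStep("extrude", {"depth_mm": -6.0, "cut": True}),  # bolt holes
    FeatureStep("fillet", {"radius_mm": 2.0, "edges": "inner-corner"}),
]

def replay(steps: list[FeatureStep], cad_api) -> None:
    """Drive the CAD system step by step (cad_api is an invented stand-in)."""
    for s in steps:
        getattr(cad_api, s.op)(**s.params)  # e.g. cad_api.extrude(depth_mm=6.0)
```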

I watched as the Backflip AI speedily modeled brackets and flanges (the kind of simple mechanical parts on which this version of the AI was trained) but I didn’t have a chance to examine those models myself. Would they have passed muster with the many Solidworks pros surrounding us in Houston? I asked Mark and Benhaim about the quality of the AI’s work.

“We’ve taught the model how to CAD, and we’re spending the next couple of months working on how to CAD well,” Benhaim told me. Mark added that in the future, companies will be able to tune the AI by training it with data from their own CAD users.

Backflip sees this AI tool as a potential gamechanger for manufacturers, allowing for much easier repair and replacement of parts.

“Many people in manufacturing are incredibly skilled. They can build things out of wood, they can create metal, they rebuild engines, but they don’t know how to CAD. And so we’re allowing them now to scan the part and then get a 3D design out of it that you can manufacture,” Mark said.

I’ll have lots more to say about Backflip as this AI tool evolves. As always, I’m keen to hear your thoughts (yes, you!), so send them my way at malba@wtwhmedia.com.

*The day after I saw Backflip’s demo, Solidworks CEO Manish Kumar announced a similar mesh-to-parametric-model feature coming to Solidworks. He emphasized that it’s still in development and did not offer any details on a timeline.

3DExperience on the Apple Vision Pro

A late announcement from last week’s 3DExperience World 2025 conference was that Dassault Systèmes has developed an app for Apple’s spatial computing headset, the Apple Vision Pro. The app, called 3DLive, will bring data from the 3DExperience platform into a virtual environment for user-defined applications.

“Using the components which are on the 3DExperience platform, you can aggregate different pieces of the virtual twin which are relevant in the context of a particular use case,” Tom Acland, CEO of 3DExcite, told me at the conference.

3DExcite is a Dassault brand focused on marketing and sales, but 3DLive is meant for more than marketers. Acland noted 3DLive experiences will also help engineers communicate ideas internally and better convey product information to end users.

Here’s a quick Dassault video teasing 3DLive:

3DLive will be released this summer. I’ll report more from my conversation with Acland soon.

Another scan-to-CAD solution

I didn’t see a demo of this one, but there’s another mesh-to-CAD solution to report: last week Creaform announced Scan-to-CAD Pro, a reverse engineering module for the 3D scanning company’s Metrology Suite. It improves on the previously available Scan-to-CAD tool by adding 2D sketching and 3D modeling features.

According to Creaform, “the new Scan-to-CAD Pro acts like a seamless gateway between 3D scanning and CAD software, such as SolidWorks.”

(Image: Creaform.)

I can’t avoid drawing a comparison to Backflip. While Scan-to-CAD Pro may offer some nifty features to speed up the mesh-to-CAD modeling process, it’s not doing the modeling for you. Who can fault it for that? But the world is spinning fast these days, and it seems that any engineering software is one AI update away from being obsolete.

Quick hits

  • More news from 3DExperience World 2025: Dassault Systèmes announced Solidworks CPQ, the company’s first configure, price and quote solution (the announcement from Solidworks CEO Manish Kumar drew applause from the conference crowd). Solidworks CPQ will be available on the 3DExperience platform this summer.
  • Also from the conference: Dassault Systèmes launched a new initiative called Solidworks SkillForce that will provide Solidworks licenses to students participating in internships or co-op programs, provided they’ve earned the Certified Solidworks Associate (CSWA) certification.
  • Trimble released SketchUp 2025. The developer says it offers enhanced visualization features and better interoperability with industry tools including Autodesk Revit.

One last link

Last week I wrote about my experience at 3DExperience World. Erin Winick Anthony, Engineering.com contributor and pinball aficionado, shared her own experience in On the floor at 3DExperience World 2025. If you like ice cream, basketball, or R2D2, check it out.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

Battling ghosts at 3DExperience World 2025
https://www.engineering.com/battling-ghosts-at-3dexperience-world-2025/
Tue, 25 Feb 2025

Live updates from Dassault Systèmes’ annual user conference.

Welcome to Engineering Paper. Today I’m reporting live from 3DExperience World 2025, Dassault Systèmes’ annual user conference taking place this year in Houston, Texas.

Where to start? Like a good Texas barbeque platter, there’s a lot to chew on.

For one, I don’t think anyone here in Houston has gone thirty seconds without mentioning AI. (There I go doing it again.) AI was enmeshed in everything we heard about during the kickoff keynote delivered by top Dassault executives.

I’m not exaggerating: “This new generation places artificial intelligence at the center of everything we do,” said Pascal Daloz, making his first 3DExperience World appearance as CEO of Dassault Systèmes.

By “new generation,” Daloz is talking about 3D UNIV+RSES, the “seventh generation of representation of the world introduced by Dassault Systèmes.” This chart explains it all:

Dassault Systèmes: The Next Generation. (Image: Dassault Systèmes.)

In a press conference following the opening keynote, Daloz compared 3D UNIV+RSES with two similarly named concepts: Meta’s metaverse and Nvidia’s Omniverse.

“The metaverse is a virtual world which is not linked to reality,” he said, pointing out that 3D UNIV+RSES connect the real and the virtual. Omniverse does that too, Daloz said, but 3D UNIV+RSES offers “the ability to navigate across different scales and disciplines… something that Omniverse cannot do.”

I’ll keep working to unpack 3D UNIV+RSES while I’m here.

Meet Aura, your virtual companion

Daloz’s keynote also introduced two new categories of AI-based services that Dassault calls generative experiences and virtual companions.

Generative experiences are “AI-driven automation[s] for assembly, requirements, design, test, [and] validations, just to give you an example,” Daloz said.

Virtual companions are “AI assistants ready to enhance your skills [and] accelerate your workflow,” Daloz said, adding “these companions are not here to replace you, they’re here to empower you.”

We were introduced to one of those companions: Aura, a chatbot currently available in 3DSwym, a collaboration app on the 3DExperience platform (and thus within reach of Solidworks users, since Solidworks connects to 3DExperience). Suchit Jain, VP of strategy and business development at Dassault Systèmes, told me the company is working to integrate Aura into other parts of the platform, such as the Solidworks user forums.

Aura is a virtual companion currently integrated in 3DSwym. (Image: Dassault Systèmes.)

I’ll report more on my conversation with Jain soon, so if you’re not subscribed to Engineering Paper, ask your virtual companion to remedy that.

Lots more AI coming to Solidworks

During the second-day keynote, Solidworks CEO Manish Kumar ran through a list of AI features planned for the CAD platform. Some are already available, such as predictive commands and automated drawings, while others are still speculative.

One interesting feature-to-be is generative rendering, where users will be able to quickly generate custom product renders. It reminded me of what Depix Technologies is doing to make designers cry, but we didn’t get many details during the presentation.

We got even fewer details about some even more interesting AI features. One was generative 3D parts, for which Kumar showed a picture of a saw handle being converted to a 3D mesh ready for simulation. On top of that, Kumar teased a mesh-to-3D feature that could turn those meshes into parametric 3D models—someday.

“Whether you are working with 3D scan data, imported mesh files, or legacy CAD models, this feature—” Kumar began to cheers from the crowd, “once fully developed, I must say—will provide a seamless way to transition from complex mesh geometry to native parametric features.”

Demo of mesh-to-3D, a feature coming (someday) to Solidworks. (Image: Dassault Systèmes.)

Stay tuned for more on the AI features coming to Solidworks.

Battling the ghost of Solidworks past

One more item from Houston: Solidworks is celebrating its 30th anniversary this year. The popular CAD program debuted in 1995 and has come a long, long way since then.

I know because I got a chance to use Solidworks 95 on a delightfully retro desk setup in the 3DExperience World exhibit hall. Attendees could earn swag if they completed a modeling challenge in the original software.

Solidworks, Furby and me: three great products of the 90s.

It took me a full 10 minutes to make the simplest part imaginable, but I’m now the proud owner of a Solidworks stress cube, some Solidworks socks, a Solidworks 95 collectible CD-ROM (if only I had a CD drive), and best of all, a Solidworks Tamagotchi.

Here’s to 30 more years of Solidworks (though I suspect if Solidworks is still around in 30 years, something will have gone horribly awry with Dassault’s AI vision).

And now, some non-Dassault news…

Altair HyperWorks 2025 now available

Altair has released HyperWorks 2025, the latest version of its design and simulation platform. Among other updates, the new release includes:

  • New transformer-based physics prediction models that Altair says will improve simulation accuracy even with limited data
  • Altair CoPilot, an AI chatbot in Altair Inspire (in beta)
  • New automation tools, including Python APIs
  • A new simulation service called Altair DSim for semiconductor functional verification
  • New physics models for particle simulation

There’s lots more to check out—you can read the full Altair HyperWorks 2025 highlights here.

NTop acquires Cloudfluid

When Engineering.com spoke with nTopology CEO Brad Rothenberg a few years ago, he asserted that his company’s generative design technology was as “capable of optimizing for fluids as it is for mechanical parts.” The company’s name has since changed (shortened to nTop), but that ambition hasn’t.

Now, nTop has taken a step toward fluid optimization by acquiring German CFD developer Cloudfluid. Cloudfluid’s GPU-native CFD solver technology will be integrated into nTop’s design platform, which the company says will particularly benefit aerospace, defense and turbomachinery applications, all heavily dependent on fluid dynamics.

“One of the biggest bottlenecks has always been solving the physics—it takes time to mesh and converge on a solution,” Rothenberg said in nTop’s press release. “Cloudfluid solves this by integrating directly with our implicit modeling core, bringing CFD into the iterative computational design loop.”
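
nTop hasn’t published code for any of this, of course, but the “iterative computational design loop” Rothenberg describes boils down to something like the Python sketch below. Both functions are invented stand-ins for illustration, not nTop’s actual API.

    # Hypothetical sketch of a CFD-in-the-loop design iteration.
    # generate_geometry() and run_cfd() are invented stand-ins, not nTop's API.

    def generate_geometry(thickness_mm: float) -> dict:
        """Stand-in for building an implicit-model variant from a parameter."""
        return {"thickness_mm": thickness_mm}

    def run_cfd(geometry: dict) -> float:
        """Stand-in for a fast GPU solve; returns a toy pressure drop in Pa."""
        t = geometry["thickness_mm"]
        return (t - 2.5) ** 2 + 10.0  # toy objective with a minimum at 2.5 mm

    best = None
    for step in range(20):
        thickness = 1.0 + 0.2 * step  # sweep the design parameter
        pressure_drop = run_cfd(generate_geometry(thickness))
        if best is None or pressure_drop < best[1]:
            best = (thickness, pressure_drop)

    print(f"Best thickness: {best[0]:.1f} mm ({best[1]:.2f} Pa)")

The promise of a GPU-native solver is making the run_cfd() step fast enough that a loop like this runs interactively instead of overnight.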

One last link

3DExperience World 2025 isn’t the only conference making a meal out of AI. Just in case you haven’t had your fill, my colleague Michael Ouellette covered the recent ARC Industry Leadership Forum in AI and Industry 5.0 are definitely not hype, and my colleague Paul J. Heney previewed Siemens’ plans for the upcoming trade fair in Siemens to focus on AI’s power at Hannover Messe.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

AI in Onshape and on ships

The generative AI revolution sets sail for ship design, plus new details about Onshape’s upcoming AI Advisor.
Welcome to Engineering Paper, where every week we line your head with headlines about design and simulation software.

For starters, Onshape users will be excited to learn that Onshape AI Advisor is coming soon.

The product support chatbot was announced with little fanfare last September, but we haven’t heard much since then. Last week I got an update from Onshape founder and PTC chief evangelist Jon Hirschtick.

“Very soon you’re going to see us launch our Onshape AI advisor. It’s working internally in testing, and it’s going to be a vehicle for lots of cool AI features,” Hirschtick told me.

AI Advisor will provide conversational support, answering questions about Onshape features and best practices. It’s based on a commercial foundation model, but it’s trained on Onshape documentation, so Hirschtick says it will give better answers than a general chatbot like ChatGPT. Each answer will also include links to sources so users can dig deeper if necessary.

I recently described AI chatbots as the “Hello World” of AI applications—the first step, the low-hanging fruit, the “my boss said we need AI so here’s the quickest thing we can do.” That’s not a knock on Onshape AI Advisor; it may prove to be a handy tool, but I think we can all agree that AI’s potential in CAD software is much higher than a chatbot.

Hirschtick sees that potential. He described AI Advisor as “just phase one” of Onshape’s plans for AI and explained how it could evolve to directly support users, such as by writing code or even modifying geometry. Hirschtick also told me the Onshape team is exploring other AI features, including AI-based rendering and generative text-to-CAD.

You can read all the details in Onshape AI Advisor is coming soon—here’s everything we know.

Siemens partners for generative AI in ship design

Speaking of AI’s potential for design software, Siemens announced that it will collaborate with Compute Maritime to “push the boundaries of generative AI in the ship design industry.”

The collaboration will connect Siemens’ Simcenter STAR-CCM+ to NeuralShipper, Compute Maritime’s vessel design and optimization platform. NeuralShipper, which Siemens describes as “a digital naval architect,” quickly generates a fleet’s worth of vessel design options to serve as a starting point for engineering teams. Compute Maritime says the generative AI tool is trained on more than 100,000 designs spanning a wide variety of vessel types.

Examples of designs generated by NeuralShipper. (Image: Siemens.)

By connecting Simcenter STAR-CCM+ to NeuralShipper, Siemens says it will bring computational fluid dynamics (CFD) and results validation to the ship design software. “The combination… enables the creation of novel vessel types and demonstrates how designers can automate simulation processes and predict real-world performance, even for the most unconventional designs,” Dmitry Ponkratov, Siemens’ marine director for simulation and test solutions, said in the press release.

However this collaboration pans out, it certainly won’t be the first time ship designers use Siemens software. You can read about another example in Design software helping to build the largest cruise ships.

Bentley opens infrastructure award nominations

Are you, or do you know, an infrastructure project worthy of recognition?

Nominations are now open for Bentley Systems’ 2025 Going Digital Awards, an annual program honoring infrastructure around the globe. Spanning 12 categories including bridges and tunnels, rail and transit, structural engineering, and more, the Going Digital Awards will be decided by independent jurors and announced on October 15, 2025 at Bentley’s Year in Infrastructure conference in Amsterdam.

You can submit your nominations here before March 31, 2025.

One last link

Model-based definition (one of the many answers to the question No really, what is MBD?) is an alternative to 2D drawings that aims to imbue 3D models with manufacturing information. It’s an intriguing idea, but it hasn’t yet made it to the mainstream.

What’s holding MBD back? Engineering.com contributor Mike Thomas writes about his company’s failed attempts to implement MBD in 6 reasons we still can’t switch to MBD—and the ways forward.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.

Onshape AI Advisor is coming soon—here’s everything we know

Founder Jon Hirschtick explains that the upcoming AI chatbot is just phase one for PTC’s cloud CAD platform.
AI is coming to Onshape, PTC’s cloud CAD platform.

While some engineering software developers have been showing off AI research and making big AI promises, Onshape has been quietly plugging away on more conventional updates—like the brand new CAM Studio.

But behind the scenes, the Onshape team is as keen on AI as everyone else. In an interview with Onshape founder and PTC chief evangelist Jon Hirschtick, Engineering.com learned that Onshape has been testing several AI features and is nearly ready to release the first: AI Advisor.

“We’re very close,” Hirschtick said. “You can see the lights on the runway.”

Jon Hirschtick, chief evangelist at PTC, delivering a keynote on AI in product development at Design Conference 2024 in Croatia. (Image: Design Conference.)

Here’s what we know about Onshape AI Advisor, who will have access to it and what other AI features it may herald.

What is Onshape AI Advisor?

Onshape AI Advisor is a product support chatbot. It was announced in September 2024 as a detail in a PTC press release about a strategic collaboration agreement with cloud computing provider AWS. At the time, PTC expected to release AI Advisor by the end of 2024.

“While designing, users will be able to type a question in simple, conversational language and the Onshape AI Advisor will respond with an answer or recommendation based on the resource library and provide links to additional information,” read PTC’s announcement.

Related: Applying AI in manufacturing: Q&A with Jon Hirschtick.

AI Advisor is powered by Amazon Bedrock, AWS’s service for building generative AI applications. Bedrock offers access to a variety of foundation models from AI developers including Anthropic, Meta, Mistral AI and more.

In our interview, Hirschtick confirmed that AI Advisor is built on a commercial foundation model, but declined to name which. Regardless, he emphasized that the Onshape team has tuned it for their userbase, and that every output will cite sources and provide external links.

“We’re giving much better results than you get if you ask these same questions to ChatGPT, or Perplexity, or Copilot, or Claude, or DeepSeek,” Hirschtick said.
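
PTC hasn’t shared implementation details, but what Hirschtick describes—a commercial foundation model grounded in Onshape’s documentation, answering with cited sources—is the classic retrieval-augmented generation pattern. Here’s a minimal Python sketch of that pattern against the real Amazon Bedrock runtime API; the search_docs() retrieval step, the prompt and the model choice are my assumptions, not PTC’s actual stack.

    import boto3

    bedrock = boto3.client("bedrock-runtime")  # AWS SDK client for Bedrock

    def search_docs(question: str) -> list[dict]:
        """Hypothetical retrieval step. A real system would query an index
        built from the product documentation and return matching passages."""
        return [{"url": "https://example.com/docs/boundary-surface",
                 "text": "A boundary surface is defined by edge curves..."}]

    def ask_advisor(question: str) -> str:
        passages = search_docs(question)
        context = "\n\n".join(p["text"] for p in passages)
        response = bedrock.converse(
            modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative
            system=[{"text": "Answer only from the documentation provided."}],
            messages=[{"role": "user",
                       "content": [{"text": f"Documentation:\n{context}\n\n"
                                            f"Question: {question}"}]}],
        )
        answer = response["output"]["message"]["content"][0]["text"]
        # Append the retrieved URLs so every answer cites its sources.
        return answer + "\n\nSources:\n" + "\n".join(p["url"] for p in passages)

Grounding the model this way is also why a tuned advisor can out-answer a general chatbot: the model only has to synthesize the retrieved passages, not remember them.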

What kind of questions can you ask Onshape AI Advisor?

Hirschtick gave some examples of how users could interact with the new AI assistant.

“How would I create a curvature continuous boundary surface in Onshape?” one user might ask.

“What features would you recommend for modeling a remote control?” another may inquire.

These are questions a user could look up in the documentation, Hirschtick admits, but “so are half the things we ask each other.” Even experienced Onshape users may not know about all the features of the oft-updated software. AI Advisor is a way to help users discover and learn new ways to design in Onshape.

(Image: PTC.)

It may debut as a product support chatbot, but Hirschtick says that’s just phase one for AI Advisor. In the future, users will be able to ask tailored questions and get more practical output. Hirschtick gave a few more examples.

“Can you give me ideas on how to improve the performance of this model?” asks a user, who is then shown some possible solutions.

“Write a conditional operator in Onshape that says if the trailer width is less than 28 the value should be 4, if not, the value should be 5,” prompts another, and AI Advisor gives the expression in Onshape’s variable syntax.
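
PTC didn’t show the actual response, but Onshape variables are referenced with a leading # and its expressions support a C-style conditional operator, so I’d expect the answer to be a one-liner along these lines (my reconstruction, not PTC’s output):

    #trailerWidth < 28 ? 4 : 5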

“The first application will just be expert advice on how to use Onshape with cited sources,” Hirschtick summarized. “Future applications may involve generating expressions, maybe someday generating API calls. It could even someday modify your model, whether it’s with text-to-CAD or other[wise].”

AI Advisor for all (for now)

We couldn’t confirm the release date for AI Advisor, but we did learn which users will have access to Onshape’s upcoming AI feature.

First, the good news: Onshape AI Advisor will launch to all Onshape subscribers, including free and educational users. That wasn’t a given—the new Onshape CAM Studio, for instance, is only available to Onshape Professional and Enterprise subscribers, plus there’s an extension called CAM Studio Advanced that will cost extra for everyone.

The chatbot’s availability may change, however. Hirschtick speculated that as AI Advisor matures and expands, some of its capabilities may be segmented by subscription tier. Time will tell, but it will probably side with Hirschtick. Given the computing cost of generative AI and the business model of SaaS, it’d be surprising if PTC kept AI Advisor free forever.

Expect more AI from Onshape—someday

The soon-to-be-released AI Advisor is just the first step Onshape will take with AI. Hirschtick said the development team is actively exploring other AI features, including AI-based rendering and generative text-to-CAD.

Onshape users shouldn’t get too excited about these tools just yet. When it comes to AI, Onshape prefers patience to flash.

“We could ship tomorrow if we wanted something that’s a demo,” Hirschtick said. “The hard part is turning these into tools that pros value in pro-level use cases. And we’re working on it.”

What are “3D UNIV+RSES”, Dassault Systèmes’ latest creation?

The better question is: what aren’t 3D UNIV+RSES?
Welcome to Engineering Paper, a weekly column bringing you the latest updates from the world of design and simulation software.

First some news from developer Dassault Systèmes. At least, I think it’s news. I’m having a hard time deciphering this press release introducing “3D UNIV+RSES”, the “seventh generation of representation of the world introduced by Dassault Systèmes.”

I have thoughts about that branding, but I’ll keep them to myself. +veryone can see how well a plus sign resembles an uppercase E—no need to point it out.

So what exactly are 3D Univpluserses? I’ll quote Dassault’s description:

“3D UNIV+RSES” represent a new class of representation of the world: virtual-plus-real representations that holistically combine modeling, simulation, real-world evidence and AI-generated content. They offer a unique and secured industry environment for combining and cross-simulating virtual twins and for training multi-AI engines while protecting customers’ IP.

That’s clear, right? We’re talking about something that represents a new class of representation. It combines real and non-real reality and virtuality (it doesn’t get any more holistic than that, folks). We’re talking AI and IP, all in one multi-engine secure industry environment.

I think I’m starting to understand the plus sign.

(Image: Dassault Systèmes.)

Dassault’s announcement doesn’t provide much concrete detail about “3D UNIV+RSES” and when or how users will interact with them. I’m sure the developer will have more to say at its upcoming user conference, 3DExperience World 2025, taking place in Houston in a couple weeks. I’ll be there myself to chase down some answers. (If you’ll be at the event and want to say hello, drop me a line at malba@wtwhmedia.com.)

Until then, I’ll try to read between the lines. It seems like Dassault is embracing the industrial metaverse, which was the trendy tech before generative AI stole our attention (and our collective cultural and scientific heritage). My guess is that this will be something like Nvidia’s Omniverse platform, a hub to integrate 3D assets for spatial computing and digital twin (er, I mean virtual twin) applications.

Whatever “3D UNIV+RSES” are, I suspect we’ll be hearing about them a lot over the next few years. At least until Dassault invents an eighth representation of the world.

A few updates on Onshape CAM Studio

Last week Onshape announced CAM Studio, a new manufacturing workspace available in beta for Professional and Enterprise subscribers, and CAM Studio Advanced, an extended manufacturing feature set that will be available as a paid extension.

I’ve since seen a demo of CAM Studio (including Advanced) from Onshape’s Cody Armstrong, senior director of technical services, and Darren Henry, senior vice president of general operations. I still don’t have details on pricing or availability, but they did answer a few of my questions about the new manufacturing environment.

As expected, CAM Studio fits naturally into the browser-based platform. You launch it from a part and it opens in a new tab (an Onshape tab, not a browser tab), where you can switch between it and other workspaces. CAM Studio has an easy-to-follow workflow—it took Armstrong seconds to generate an example toolpath—and it includes an extensive machine and tool library that will continue to expand with Onshape’s regular release cycle of an update every three weeks.

It still needs time to mature, but Armstrong believes CAM Studio is a viable replacement for existing desktop CAM tools—as long as you just need milling. Turning is yet to come.

Creating a toolpath in Onshape CAM Studio. (Image: Onshape.)

I’ll write more about CAM Studio, but here are two more details for now. One, there are no additive manufacturing capabilities in CAM Studio, but Armstrong said it’s on Onshape’s radar for a future release. Two, CAM Studio provides machine simulation, but not G-code simulation. The former uses specialized CAM data, whereas the latter uses the actual code that drives a specific CNC machine. Theoretically these simulations are the same, but in reality they may not be perfectly identical.
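
To make that distinction concrete: machine simulation works from the CAM system’s internal toolpath data, while G-code simulation would work from post-processed controller code like the generic milling fragment below (my illustration, not CAM Studio output).

    N10 G90 G21          (absolute positioning, millimeter units)
    N20 M03 S12000       (spindle on clockwise, 12,000 rpm)
    N30 G00 X0 Y0 Z5.0   (rapid move above the start point)
    N40 G01 Z-1.0 F300   (feed down into the stock)
    N50 G01 X50.0 F600   (linear cutting move)
    N60 M05              (spindle stop)

Every controller speaks a slightly different dialect of this, which is exactly why the two simulations can disagree in practice.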

Ansys 2025 R1 now available

Ansys announced a slew of updates across its portfolio with the Ansys 2025 R1 release. Some of the highlights include expanded thermal modeling in Ansys Discovery, support for higher mesh counts in Ansys Fluent fluid simulations, more GPU support and high performance computing (HPC) tools across applications, a better way to prepare training data for Ansys SimAI, and lots of other new features. If you use any Ansys products, you’ll probably find something of interest.

For full details, see the Ansys 2025 R1 highlights.

Quick hits: HPCWorks, SpiCAT, and digital twin training

  • Altair announced Altair HPCWorks 2025, the latest update to the developer’s HPC and cloud platform. HPCWorks 2025 is now available through Altair Units, the company’s token-based licensing system. The new release also integrates with Altair RapidMiner for HPC-enabled AI and data analytics workflows, and includes the usual round of security enhancements and performance improvements.
  • Kyocera AVX has added supercapacitors to a new version of its SpiCAT online catalog of electronic components. SpiCAT provides specs and downloadable 3D part models for many types of AVX capacitors, including multilayer ceramic capacitors (MLCCs), polymer, tantalum, niobium and more. (If you want to learn more about the differences between these, check out The engineer’s complete guide to capacitors.)
  • The Digital Twin Consortium announced a series of training workshops to build hands-on experience with digital twins. The day-long event will take place on March 17, 2025 at the Hyatt Regency in Reston, Virginia, and feature industry speakers from Dassault Systèmes, Axomem and others.

One last link

Have you ever wondered how massive cruise ships are designed and built? Wonder no longer with this shipyard tour from cruise-aficionado and mechanical engineer Paul J. Heney: Design software helping to build the largest cruise ships.

Got news, tips, comments, or complaints? Send them my way: malba@wtwhmedia.com.
