Decoding Dassault’s 3D Universes jargon: combining virtual and real intelligence https://www.engineering.com/decoding-dassaults-3d-universes-jargon-combining-virtual-and-real-intelligence/ Mon, 24 Mar 2025 18:00:02 +0000 https://www.engineering.com/?p=137969 Can Dassault Systèmes convince the market that this is more than just another buzzword-laden evolution?

Product Lifecycle Management (PLM) reimagined: from static digital twins to an AI-powered, generative intelligence ecosystem. (Image: Dassault Systèmes.)

Dassault Systèmes has unveiled 3D Universes (styled as 3D UNIV+RSES for branding), a bold step toward reimagining how industries engage with digital and physical realities. This is not just another 3D modeling update. It represents a fundamental shift from static digital twins to an AI-powered, generative intelligence ecosystem. The branding itself—3D UNIV+RSES instead of “3D Universes”—signals a new paradigm where virtual and real (V+R) are seamlessly integrated, enabling continuous learning, automation, and adaptability across product lifecycles.

But with this shift comes a set of key challenges: What does this mean for legacy users? How will intellectual property be managed in an AI-driven world? And can Dassault Systèmes convince the market that this is more than just another buzzword-laden evolution?

Virtual + real: more than just digital twins

The concept of V+R (Virtual + Real) is not new to Dassault Systèmes. It has been a central theme in the company’s Virtual Twin Experience, where digital twins are no longer mere representations but are continuously evolving with real-world inputs.

In 3D Universes, this vision is taken further:

  • AI-powered models learn from real-world behaviors and adjust accordingly.
  • Virtual companions provide intelligent assistance in decision-making.
  • Generative AI and sense computing optimize designs and simulations in real time.

This moves beyond the traditional “digital twin” approach. Rather than acting as a static mirror of the physical world, 3D Universes enables a dynamic, self-improving system that continuously integrates, analyzes, and adapts. The idea is not new. For instance, Siemens and other ‘PLM software’ providers are actively exploring opportunities for AI to add an intelligent layer to the PLM data backbone.

From static to generative intelligence

Dassault Systèmes has long been a leader in 3D modeling, PDM/PLM, and simulation, though 3D Universes marks a significant departure from traditional software functionality. It introduces an AI-driven, generative framework that transforms how products are designed, validated, and maintained.

Key differentiators from this new positioning include:

  • AI-assisted workflows that automatically refine and evolve designs.
  • Predictive simulations that adapt based on real-world sensor data.
  • A “living” knowledge platform that evolves with industry trends and user inputs.

You get the idea. Rather than designing a product in isolation, cross-functional teams spanning product development, engineering, quality, procurement, and the supply chain can now co-create with AI, enabling an iterative, automated process that reduces risk, enhances efficiency, and accelerates innovation cycles.

Beyond software—a living digital ecosystem

The shift to 3D Universes also seems to represent a move away from traditional licensing-based software models toward a consumption-based, Experience-as-a-Service (XaaS) framework, a commercial model similar to the “AI-as-a-Service” approach recently described by Microsoft CEO Satya Nadella. This aligns with broader industry trends where companies are transitioning from one-time software purchases to continuous value-driven digital services.

What does this mean in practical terms?

  • Customers will consume intelligence rather than static software.
  • Real-time virtual twins will become decision-making hubs, constantly updating based on real-world inputs.
  • AI-generated designs will automate engineering iterations, dramatically reducing manual effort.

This is a major shift for legacy customers who are accustomed to on-premises, private cloud hosting, and transactional software ownership. Dassault Systèmes will need to provide a clear roadmap to help these organizations transition without disrupting their existing workflows and wider integration landscape.

IP, trust and the generative economy

One of the most critical challenges in this transformation is intellectual property (IP) ownership and data security. In an AI-driven, generative economy, where does human ingenuity end and machine-driven design begin? If AI generates a product variation based on learning from past designs, who owns the output?

Some key concerns include:

  • Ensuring IP integrity when AI continuously iterates on existing designs.
  • Managing security risks as real-world data feeds into digital models.
  • Addressing industry adoption barriers for companies that have built their entire business around traditional IP protection frameworks.

Dassault Systèmes, and other enterprise solution providers in this space, will need to provide strong governance mechanisms to help customers navigate these complexities and build trust in the generative AI-powered design process.

Dassault Systèmes released a teaser video on YouTube outlining the core ambitions of 3D Universes and reinforcing its role in shaping a new generative economy, elaborating on these key messages:

  • Virtual-Plus-Real Integration: A seamless blend of digital and physical data enhances accuracy and applicability in simulations.
  • Generative AI Integration: AI-driven processes enable more adaptable and intelligent design iterations.
  • Secure Industry Environment: A trusted space for integrating and cross-simulating virtual twins while ensuring IP protection.
  • Training Multi-AI Engines: Supports the development of AI models within a unified framework, promoting more sophisticated AI applications.

While the video presents a compelling vision and sets expectations for an aspirational 15-year journey stretching to 2040, it introduces complex terminology that might not be easily digestible for a broad audience. The “Universes” branding adds an extra layer of abstraction that could benefit from clearer explanations and, in due time, a gradual transition roadmap for legacy users.

Additionally, the practical implementation and real-world applications remain vague, leaving some unanswered questions about industry adoption and integration. How will companies transition to this model? What are the concrete steps beyond the conceptual framework? The challenge will be ensuring that this does not become another overcooked marketing push that confuses rather than inspires potential adopters. Users demand clarity and pragmatism in linking solutions to problem statements and practical value realization.

A bold leap into the future

The potential of 3D Universes is enormous, but its success hinges on several key factors:

  • Market Education: Dassault Systèmes must articulate the value proposition beyond buzzwords, demonstrating tangible ROI for both new and legacy users.
  • Seamless Transition Strategy: Organizations need a clear pathway to adopt 3D Universes without disrupting their current operations.
  • AI Governance & IP Assurance: Addressing industry concerns around AI-generated designs, IP ownership, ethical AI, and data security will be crucial for widespread adoption.

If 3D Universes delivers on its promise, it has the potential to redefine how industries design, simulate, and optimize products across their entire lifecycle. By truly integrating Virtual + Real intelligence, Dassault Systèmes is making a bold statement about the next frontier of digital transformation.

The question now is: Are industries ready to embrace this generative future, or will skepticism slow its adoption? Furthermore, where should organizations start on this journey? Can solution providers be bold enough to share a pragmatic roadmap towards this goal, and keep us posted on their learnings in this space? Will 3D Universes bring us one step closer to the “Industry Renaissance” previously advocated by Dassault Systèmes Chairman Bernard Charlès? Time will tell, but one thing is certain—Dassault Systèmes is positioning itself at the forefront of the next industrial/digital revolution.

Understanding AI in manufacturing: agentic AI, causal AI, and LLMs https://www.engineering.com/understanding-ai-in-manufacturing-agentic-ai-causal-ai-and-llms/ Mon, 24 Mar 2025 16:17:42 +0000 https://www.engineering.com/?p=137968 Understanding the differences can help engineers select the right approach for specific challenges.

In manufacturing, different types of AI serve distinct purposes. Understanding the differences between Agentic AI, Causal AI, and Large Language Models (LLMs) can help engineers select the right approach for specific challenges.

Agentic AI: autonomous decision-making

Agentic AI refers to AI systems that act autonomously based on goals, real-time data, and feedback loops. In manufacturing, this can be seen in self-optimizing production lines, automated quality control, and predictive maintenance. Unlike traditional automation, Agentic AI can adapt to changing conditions and make independent operational decisions without human intervention, improving efficiency and reducing downtime.

Causal AI: understanding cause and effect

Causal AI goes beyond pattern recognition by identifying cause-and-effect relationships in complex manufacturing systems. This AI type is valuable in root cause analysis, process optimization, and failure prediction. Unlike conventional machine learning, which correlates data, Causal AI determines why failures or inefficiencies occur, enabling engineers to implement targeted solutions rather than just responding to symptoms.

LLMs: processing language and documentation

Large Language Models (LLMs), such as GPT-based AI, specialize in natural language processing (NLP). While not directly involved in factory operations, LLMs help automate documentation, generate maintenance reports, assist with troubleshooting, and provide AI-driven technical support. They can summarize engineering papers, create standard operating procedures (SOPs), and improve communication across teams.

Choosing the right AI

Each AI type complements manufacturing processes in unique ways, leading to smarter, more efficient operations.

Manufacturing example: causal AI vs. agentic AI

Causal AI use case: predicting and preventing machine failures.

How it works: A factory uses causal AI to analyze historical sensor data from machines. Instead of just identifying correlations (“when temperature rises, breakdowns occur”), it determines the cause (“excess vibration due to misalignment leads to overheating, which causes failure”). This allows engineers to intervene proactively by fixing the misalignment before a breakdown happens.

Agentic AI use case: autonomous production line optimization.

How it works: An agentic AI system monitors production efficiency in real time and autonomously adjusts machine settings for optimal output. If a machine slows down, the AI dynamically reallocates work to other machines without human intervention. It learns from past production data and adapts continuously to maximize efficiency while minimizing waste and energy use.

Key differences:

  • Causal AI helps engineers understand why failures happen and improves decision-making.
  • Agentic AI takes direct action, adjusting processes autonomously to optimize performance.

In manufacturing, causal AI and agentic AI serve different but complementary roles, as the sketch below illustrates.
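To make the contrast concrete, here is a minimal Python sketch, invented for illustration rather than drawn from any vendor's product. The causal-style function only explains a failure path for engineers; the agentic-style scheduler acts on the line directly. All names, thresholds, and readings are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MachineReading:
    machine_id: str
    temperature_c: float
    vibration_mm_s: float
    throughput_pct: float  # current output vs. rated capacity

def diagnose_root_cause(reading: MachineReading) -> str:
    """Causal-AI style output: an explanation for engineers, not an action."""
    if reading.vibration_mm_s > 7.0 and reading.temperature_c > 80.0:
        return ("Excess vibration (likely misalignment) is driving overheating; "
                "realign before a breakdown occurs.")
    return "No causal failure path identified."

class AgenticScheduler:
    """Agentic-AI style behavior: adjusts the line without human sign-off."""
    def __init__(self, machines: list[str]):
        self.workload = {m: 1.0 for m in machines}  # relative work share

    def step(self, readings: list[MachineReading]) -> None:
        for r in readings:
            if r.throughput_pct < 70.0:  # machine is slowing down
                shed = self.workload[r.machine_id] * 0.5
                others = [m for m in self.workload if m != r.machine_id]
                if not others:
                    continue
                self.workload[r.machine_id] -= shed
                for m in others:  # reallocate the shed work elsewhere
                    self.workload[m] += shed / len(others)

readings = [MachineReading("press-1", 85.0, 8.2, 55.0),
            MachineReading("press-2", 60.0, 2.1, 95.0)]
print(diagnose_root_cause(readings[0]))  # causal: explains why
scheduler = AgenticScheduler(["press-1", "press-2"])
scheduler.step(readings)  # agentic: rebalances work automatically
print(scheduler.workload)
```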

Hybrid approach: smart factory with both AI types

A factory can integrate both causal and agentic AI. Causal AI analyzes sensor data and finds that high humidity causes metal corrosion, leading to increased friction and machine failure. Agentic AI uses this insight to autonomously adjust humidity controls, slow down machines at risk, and reassign production tasks to avoid downtime. Over time, the system learns and adapts, reducing failures, increasing efficiency, and lowering maintenance costs.

Together, they create self-optimizing factories with minimal human intervention.
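As a minimal sketch of that hybrid loop, with the humidity threshold, actuator call, and machine names all invented for illustration, the causal finding becomes a rule the agent acts on:

```python
def causal_insight(humidity_pct: float) -> bool:
    # Finding attributed to the causal model: high humidity -> corrosion ->
    # friction -> failure. True means the causal risk path is active.
    return humidity_pct > 65.0

def set_dehumidifier(target_pct: float) -> None:
    # Hypothetical actuator call standing in for a real plant interface.
    print(f"dehumidifier setpoint -> {target_pct}%")

def agentic_response(humidity_pct: float,
                     machine_speeds: dict[str, float]) -> dict[str, float]:
    # The agent acts on the causal insight: dehumidify and derate at-risk machines.
    if causal_insight(humidity_pct):
        set_dehumidifier(target_pct=55.0)
        return {m: s * 0.8 for m, s in machine_speeds.items()}
    return machine_speeds

print(agentic_response(72.0, {"lathe-1": 1200.0, "mill-2": 900.0}))
```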

Foxconn unveils industrial LLM with 4 week training method https://www.engineering.com/foxconn-unveils-industrial-llm-with-4-week-training-method/ Tue, 11 Mar 2025 14:49:43 +0000 https://www.engineering.com/?p=137531 Originally designed for internal use, this Chinese language AI will be open sourced and shared publicly.

Multinational electronics contract manufacturing giant Foxconn has announced the launch of the first Traditional Chinese Large Language Model (LLM). Developed by its research and development arm, Hon Hai Research Institute (HHRI), the LLM was trained with what the company says is a more efficient, lower-cost method, completed in just four weeks.

The institute, which is headquartered in Tucheng, Taiwan, said the LLM—named FoxBrain—will be open sourced and shared publicly, but did not disclose a timeline.

It was originally designed for the company’s internal systems, covering functions such as data analysis, decision support, document collaboration, mathematics, reasoning, problem solving, and code generation.

The company says FoxBrain not only demonstrates powerful comprehension and reasoning capabilities but is also optimized for Taiwanese users’ language style, showing excellent performance in mathematical and logical reasoning tests.

“In recent months, the deepening of reasoning capabilities and the efficient use of GPUs have gradually become the mainstream development in the field of AI. Our FoxBrain model adopted a very efficient training strategy, focusing on optimizing the training process rather than blindly accumulating computing power,” said Dr. Yung-Hui Li, Director of the Artificial Intelligence Research Center at Hon Hai Research Institute. “Through carefully designed training methods and resource optimization, we have successfully built a local AI model with powerful reasoning capabilities.”

The FoxBrain training process was powered by 120 NVIDIA H100 GPUs, scaled with NVIDIA Quantum-2 InfiniBand networking, and finished in about four weeks. Compared with inference models recently launched in the market, the more efficient and lower-cost model training method sets a new milestone for the development of Taiwan’s AI technology.

FoxBrain is based on the Meta Llama 3.1 architecture with 70B parameters. In most categories of the TMMLU+ test dataset, it outperforms Llama-3-Taiwan-70B of the same scale, particularly excelling in mathematics and logical reasoning. Some technical specifications and training strategies for FoxBrain include:

  • Established data augmentation methods and quality assessment for 24 topic categories through proprietary technology, generating 98B tokens of high-quality pre-training data for Traditional Chinese
  • Context window length: 128K tokens
  • Utilized 120 NVIDIA H100 GPUs for training, with total computational cost of 2,688 GPU days
  • Employed multi-node parallel training architecture to ensure high performance and stability
  • Used a unique Adaptive Reasoning Reflection technique to train the model in autonomous reasoning
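A quick sanity check on those figures: 2,688 GPU-days spread across 120 GPUs works out to 2,688 / 120 = 22.4 days of wall-clock training, a little over three weeks, which is consistent with the roughly four-week schedule the company cites.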

The company says FoxBrain showed comprehensive improvements in mathematics compared to the base Meta Llama 3.1 model. It achieved significant progress in mathematical tests compared to Taiwan Llama, currently the best Traditional Chinese large model, and surpassed Meta’s current models of the same class in mathematical reasoning ability. While there is still a slight gap with DeepSeek’s distillation model, Hon Hai says its performance is already very close to world-leading standards.

FoxBrain’s development—from data collection, cleaning and augmentation to Continual Pre-Training, Supervised Finetuning, RLAIF, and Adaptive Reasoning Reflection—was accomplished step by step through independent research, ultimately achieving benefits approaching world-class AI models despite limited computational resources.

Although FoxBrain was originally designed for internal group applications, in the future, Foxconn will continue to collaborate with technology partners to expand FoxBrain’s applications, share its open-source information, and promote AI in manufacturing, supply chain management, and intelligent decision-making.

NVIDIA provided support during training through the Taipei-1 Supercomputer and technical consultation, enabling Hon Hai Research Institute to successfully complete the model pre-training with NVIDIA NeMo. FoxBrain will also become an important engine to drive the upgrade of Foxconn’s three major platforms: Smart Manufacturing, Smart EV and Smart City.

The results of FoxBrain are scheduled to be shared publicly for the first time during a session talk at NVIDIA GTC 2025 on March 20.

Hon Hai Research Institute, the research and development arm of Foxconn, has five research centers. Each center has an average of 40 high-technology R&D professionals focused on researching and developing new technologies and strengthening Foxconn’s technology and product innovation pipeline.

How can engineers reduce AI model hallucinations – part 2 https://www.engineering.com/how-can-engineers-reduce-ai-model-hallucinations-part-2/ Mon, 10 Mar 2025 19:09:36 +0000 https://www.engineering.com/?p=137487 More best practices engineers can use to significantly reduce model hallucinations.

Many engineers have adopted generative AI at a record pace as part of their organization’s digital transformation. They like its tangible business benefits, the breadth of its applications, and often its ease of implementation.

Hallucinations can significantly undermine end-user trust. They arise from various factors, including:

  • Patchy, insufficient or false training data. It results in the Large Language Model (LLM or model) fabricating information when it’s unsure of the correct answer.
  • Model lacks proper grounding and context to determine factual inaccuracies.
  • Excessive model complexity for the application.
  • Inadequate software testing.
  • Poorly crafted, imprecise or vague end-user prompts.

Organizations can mitigate the risk and frequency of these hallucinations, avoiding embarrassment to the company and misleading its customers, by adopting multiple strategies, including:

  • Clear model goal.
  • Balanced training data.
  • Accurate training data.
  • Adversarial fortification.
  • Sufficient model tuning.
  • Limit responses.
  • Comprehensive model testing.
  • Precision prompts.
  • Fact-check outputs.
  • Human oversight.

Let’s explore the last five of these mitigations in more detail. To read about the first five, click here.

Limit responses

Models produce hallucinations more often when they lack constraints that limit the scope of possible outputs. To improve the overall accuracy of outputs, define boundaries for models using filtering tools, maximum word lengths and clear probabilistic thresholds for the acceptability of outputs. These limits reduce the risk of hallucinations.

For example, when the model cannot assign a sufficient confidence level to a proposed recommendation about optimizing a production process, it should not provide that output to an engineer.
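As a minimal sketch of such a gate, assuming a model wrapper that returns an answer together with a confidence score (the `generate` stub below stands in for a real model call):

```python
CONFIDENCE_FLOOR = 0.75  # probabilistic threshold for acceptable outputs

def generate(prompt: str) -> tuple[str, float]:
    # Stand-in for a real model call that returns (answer, confidence).
    return "Increase feed rate by 3%.", 0.62

def gated_answer(prompt: str) -> str | None:
    answer, confidence = generate(prompt)
    if confidence < CONFIDENCE_FLOOR:
        return None  # abstain rather than show a low-confidence recommendation
    return answer

print(gated_answer("How should we optimize the stamping line?"))  # -> None
```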

Comprehensive model testing

Inadequately tested models produce more hallucinations than comprehensively tested models.

Testing typically detects hallucinations by cross-referencing model-generated output with other trusted and authoritative sources.
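A minimal sketch of that cross-referencing idea, with an invented gold set of trusted values and a stubbed-out model under test:

```python
GOLD_SET = {
    "density of copper (g/cm3)": "8.96",
    "melting point of aluminum (C)": "660",
}

def ask_model(question: str) -> str:
    # Stand-in for the model under test; the wrong answer is deliberate.
    canned = {"density of copper (g/cm3)": "8.96",
              "melting point of aluminum (C)": "590"}
    return canned[question]

def hallucination_rate() -> float:
    misses = sum(1 for q, truth in GOLD_SET.items()
                 if ask_model(q).strip() != truth)
    return misses / len(GOLD_SET)

rate = hallucination_rate()
print(f"hallucination rate on gold set: {rate:.0%}")  # 50%: block the release
```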

It’s easy to recommend testing models rigorously before production use. It is vital to preventing or at least dramatically reducing the risk of hallucinations. However, software development teams are always under schedule pressure, and testing is the easiest task to shortchange because it occurs near the end of the project.

For example, project managers must assertively remind management of the costs and reputational risks of releasing inadequately tested models for production use.

Precision prompts

Ambiguity or lack of specificity in prompts can result in the model generating hallucinations or output that doesn’t align with the end-user’s intent. That result decreases confidence in the model or causes misinterpretation or misinformation.

Asking the right question is essential to achieve superior outputs from models. Accurate, relevant outputs depend on the clarity and specificity of engineers’ prompts. Precision prompts that reduce hallucinations exhibit these features:

  • Maximize clarity and specificity by writing prompts that are as short and specific as possible.
  • Provide context such as time, location or unique identifiers to narrow the scope.
  • Use descriptive language by specifying relevant characteristics such as profession, discipline, industry or geographic region.
  • Plan an iterative approach by refining successive prompts based on previous outputs.
  • Minimize the risk of biased outputs by ensuring fairness and inclusivity.

For example, write a specific prompt like “How is consistency achieved in stamping steel automotive wheels?” Avoid a general prompt like “How is quality achieved in manufacturing wheels?”
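One way to operationalize those features is a simple prompt template that forces context and specificity into every request; the field names below are illustrative, not a standard:

```python
def build_prompt(process: str, material: str, product: str, question: str) -> str:
    # Bakes context (process, material, product) into a short, specific prompt.
    return (f"You are answering for a {process} engineer. "
            f"Material: {material}. Product: {product}. "
            f"Question: {question} Answer concisely and state your assumptions.")

print(build_prompt("stamping", "steel", "automotive wheels",
                   "How is dimensional consistency achieved?"))
```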

Fact-check outputs

Sometimes hallucinations go unrecognized and are used by engineers in their work, with dangerous or expensive consequences.

Engineers can reduce this hallucination risk by:

  • Fact-checking the output against other sources.
  • Asking the model to describe its reasoning and data sources.
  • Checking if the output is logically consistent and aligns with general world knowledge.
  • Writing a slightly different prompt to see if the model produces the same output.

For example, a prompt about a chemical additive should not produce output about a closely related but materially different chemical.
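The rephrase-and-compare check from the list above can be automated in a few lines; `ask_model`, the additive name, and the canned answers are hypothetical stand-ins:

```python
def ask_model(prompt: str) -> str:
    # Stand-in for a real model call; the inconsistency is deliberate.
    canned = {
        "What is the flash point of additive X-100?": "61 C",
        "At what temperature does additive X-100 flash?": "48 C",
    }
    return canned[prompt]

paraphrases = [
    "What is the flash point of additive X-100?",
    "At what temperature does additive X-100 flash?",
]
answers = {ask_model(p) for p in paraphrases}
if len(answers) > 1:
    print(f"Inconsistent answers {answers}: fact-check before using.")
```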

Human oversight

Once a model is in routine production use, it’s tempting for engineers to move on to address the next AI opportunity. However, not monitoring the performance of your AI application means you have no sense of the:

  • Number of hallucinations it’s producing.
  • Need to adjust or retrain the model as data ages and evolves.
  • Evolving end-user requirements that need to be addressed through model enhancements.

A better practice is to assign an analyst to regularly sample model outputs to validate their accuracy and relevance. Analysts can spot hallucinations that suggest model refinement is necessary.

For example, a model designed to support problem diagnosis for complex production machinery may occasionally provide an inaccurate investigation recommendation.
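A minimal sketch of that sampling practice, with an invented log format; the analyst reviews a fixed random sample each week and tracks the hallucination rate over time:

```python
import random

output_log = [
    {"id": i, "prompt": f"diagnosis request {i}", "output": f"recommendation {i}"}
    for i in range(1000)
]

def weekly_review_sample(log: list[dict], k: int = 25) -> list[dict]:
    # The analyst validates k randomly chosen outputs and records which are
    # hallucinations; a rising rate signals the model needs retraining.
    return random.sample(log, k)

for record in weekly_review_sample(output_log, k=3):
    print(record["id"], record["output"])
```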

By implementing these best practices, engineers can significantly reduce model hallucinations and build confidence in the reliability of model outputs to advance their digital transformation.

3 Steps to AI Operational Excellence https://www.engineering.com/resources/3-steps-to-ai-operational-excellence/ Thu, 06 Mar 2025 15:54:00 +0000 https://www.engineering.com/?post_type=resources&p=137358 A guide to safer, more efficient asset operations.

Unlock the secrets to safer, smarter asset operations with our free white paper, 3 Steps to AI Operational Excellence. Discover how cutting-edge AI streamlines operations and engineering workflows, boosts efficiency, and enhances safety in just three actionable steps. Download now and transform your operations with proven strategies!

Your download is sponsored by OpenText.

Schneider announces new AI patent for process safety https://www.engineering.com/schneider-announces-new-ai-patent-for-process-safety/ Wed, 19 Feb 2025 16:47:52 +0000 https://www.engineering.com/?p=136881 The announcement is part of an initiative to answer a growing interest in combining AI and human ingenuity in functional safety analysis.

Schneider Electric has announced a new patent to leverage artificial intelligence (AI) to help reduce the likelihood of process safety hazards.

The company says its new system automatically or semi-automatically analyzes potential process hazards and validates protection mechanisms in an industrial process. Users can then work to prevent hazards using an analysis tool that helps identify protective mechanisms for the process.

This patent is a part of Schneider’s strategic initiative to enhance functional safety using AI. It is now possible to simulate hazards with varying conditions and then attempt to prevent dangerous conditions by using a process hazard analysis tool to generate protective actions.

As more industries embrace digital transformation and generate high-quality data, the advantages of implementing AI in day-to-day operations increase. This latest patent from Schneider’s EcoStruxure Triconex Safety team has the potential to identify hazards and safeguards in a process.

Process safety management can then take advantage of industrial, real-time data to revalidate hazard and operability (HAZOP) studies to prevent industrial hazards and save lives.

“We are the first to push this boundary of automating the hazard process analysis with artificial intelligence,” said Chris Stogner, Schneider Electric’s senior director of offer management. “Bringing AI to functional safety has the potential to create a more rigorous and robust HAZOP study, generating more combinations of scenarios and deviations than what was humanly possible before.”

Three other Schneider Electric patents incorporating AI into the functional safety lifecycle are currently pending. The company is developing these initiatives to answer a growing interest in combining human ingenuity in functional safety analysis with the strategic implementation of reinforcement learning to prevent hazardous scenarios in industrial automation.

Onshape AI Advisor is coming soon—here’s everything we know https://www.engineering.com/onshape-ai-advisor-is-coming-soon-heres-everything-we-know/ Fri, 14 Feb 2025 16:38:40 +0000 https://www.engineering.com/?p=136765 Founder Jon Hirschtick explains that the upcoming AI chatbot is just phase one for PTC’s cloud CAD platform.

AI is coming to Onshape, PTC’s cloud CAD platform.

While some engineering software developers have been showing off AI research and making big AI promises, Onshape has been quietly plugging away on more conventional updates—like the brand new CAM Studio.

But behind the scenes, the Onshape team is as keen on AI as everyone else. In an interview with Onshape founder and PTC chief evangelist Jon Hirschtick, Engineering.com learned that Onshape has been testing several AI features and is nearly ready to release the first: AI Advisor.

“We’re very close,” Hirschtick said. “You can see the lights on the runway.”

Jon Hirschtick, chief evangelist at PTC, delivering a keynote on AI in product development at Design Conference 2024 in Croatia. (Image: Design Conference.)

Here’s what we know about Onshape AI Advisor, who will have access to it and what other AI features it may herald.

What is Onshape AI Advisor?

Onshape AI Advisor is a product support chatbot. It was announced in September 2024 as a detail in a PTC press release about a strategic collaboration agreement with cloud computing provider AWS. At the time, PTC expected to release AI Advisor by the end of 2024.

“While designing, users will be able to type a question in simple, conversational language and the Onshape AI Advisor will respond with an answer or recommendation based on the resource library and provide links to additional information,” read PTC’s announcement.

Related: Applying AI in manufacturing: Q&A with Jon Hirschtick.

AI Advisor is powered by Amazon Bedrock, AWS’s service for building generative AI applications. Bedrock offers access to a variety of foundation models from AI developers including Anthropic, Meta, Mistral AI and more.

In our interview, Hirschtick confirmed that AI Advisor is built on a commercial foundation model, but declined to name which. Regardless, he emphasized that the Onshape team has tuned it for their userbase, and that every output will cite sources and provide external links.

“We’re giving much better results than you get if you ask these same questions to ChatGPT, or Perplexity, or Copilot, or Claude, or DeepSeek,” Hirschtick said.

What kind of questions can you ask Onshape AI Advisor?

Hirschtick gave some examples of how users could interact with the new AI assistant.

“How would I create a curvature continuous boundary surface in Onshape?” one user might ask.

“What features would you recommend for modelling a remote control?” another may inquire.

These are questions a user could look up in the documentation, Hirschtick admits, but “so are half the things we ask each other.” Even experienced Onshape users may not know about all the features of the oft-updated software. AI Advisor is a way to help users discover and learn new ways to design in Onshape.

(Image: PTC.)

It may debut as a product support chatbot, but Hirschtick says that’s just phase one for AI Advisor. In the future, users will be able to ask tailored questions and get more practical output. Hirschtick gave a few more examples.

“Can you give me ideas on how to improve the performance of this model?” asks a user, who is then shown some possible solutions.

“Write a conditional operator in Onshape that says if the trailer width is less than 28 the value should be 4, if not, the value should be 5,” prompts another, and AI Advisor gives the expression in Onshape’s variable syntax.

“The first application will just be expert advice on how to use Onshape with cited sources,” Hirschtick summarized. “Future applications may involve generating expressions, maybe someday generating API calls. It could even someday modify your model, whether it’s with text-to-CAD or other[wise].”

AI Advisor for all (for now)

We couldn’t confirm the release date for AI Advisor, but we did learn which users will have access to Onshape’s upcoming AI feature.

First, the good news: Onshape AI Advisor will launch to all Onshape subscribers, including free and educational users. That wasn’t a given—the new Onshape CAM Studio, for instance, is only available to Onshape Professional and Enterprise subscribers, plus there’s an extension called CAM Studio Advanced that will cost extra for everyone.

The chatbot’s availability may change, however. Hirschtick speculated that as AI Advisor matures and expands, some of its capabilities may be segmented by subscription tier. Time will tell, but it will probably prove Hirschtick right: given the computing cost of generative AI and the business model of SaaS, it’d be surprising if PTC kept AI Advisor free forever.

Expect more AI from Onshape—someday

The soon-to-be-released AI Advisor is just the first step Onshape will take with AI. Hirschtick said the development team is actively exploring other AI features, including AI-based rendering and generative text-to-CAD.

Onshape users shouldn’t get too excited about these tools just yet. When it comes to AI, Onshape prefers patience to flash.

“We could ship tomorrow if we wanted something that’s a demo,” Hirschtick said. “The hard part is turning these into tools that pros value in pro-level use cases. And we’re working on it.”

PTC Launches ServiceMax AI Field Service Assistant https://www.engineering.com/ptc-launches-servicemax-ai-field-service-assistant/ Thu, 13 Feb 2025 09:00:00 +0000 https://www.engineering.com/?p=136685 Reschedules appointments, automates manual tasks, reviews asset history, and provides predictive maintenance guidance.

BOSTON, MA, Feb 12, 2025 – PTC has announced the release of the ServiceMax AI field service management assistant powered by generative artificial intelligence (GenAI). ServiceMax AI leverages the full documented history of a field asset stored in the ServiceMax platform, including equipment data, service history, and known service resolutions, to help field service technicians get more done in less time. With the power of GenAI, technicians can use ServiceMax AI Chat to answer questions about a specific job or asset, automate manual documentation and scheduling tasks, and review proactive recommendations for predictive maintenance.

Image courtesy of PTC.

ServiceMax AI is based on decades of field service expertise and the latest GenAI technology, enabling service organizations to modernize their workflows and the technician experience.

For a more in-depth look at ServiceMax AI, including how it’s addressing workforce challenges, how it mimics natural human interaction, and its place in the age of agentic AI, please read this blog from Joseph June, general manager of ServiceMax, The Next Evolution in Field Service: AI-Powered ServiceMax is Solving the Technician Knowledge Challenge.

For more information, visit ptc.com.

How can engineers reduce AI model hallucinations? https://www.engineering.com/how-can-engineers-reduce-ai-model-hallucinations/ Wed, 12 Feb 2025 16:00:00 +0000 https://www.engineering.com/?p=136468 The first of a two-part series discusses best practices to help engineers significantly reduce model hallucinations

Many engineers have adopted generative AI as part of their organization’s digital transformation. They like its tangible business benefits, the breadth of its applications, and its ease of implementation.

Offsetting all this considerable value, generative AI sometimes produces inaccurate, biased or nonsensical output that appears authentic. Such outputs are called hallucinations. The following types of hallucinations occur:

  • Output doesn’t match what is known to be accurate or true.
  • Output is not related to the end-user prompt.
  • Output is internally inconsistent or contains contradictions.

Hallucinations can significantly undermine end-user trust. They arise from various factors, including:

  • Patchy, insufficient or false training data. It results in the Large Language Model (LLM or model) fabricating information when it’s unsure of the correct answer.
  • Model lacks proper grounding and context to determine factual inaccuracies.
  • Excessive model complexity for the application.
  • Inadequate software testing.
  • Poorly crafted, imprecise or vague end-user prompts.

AI hallucinations can have significant consequences for real-world applications that erode engineers’ confidence. For example, an AI hallucination can:

  • Provide inaccurate values leading to an erroneous engineering load calculation and product failure.
  • Suggest stock trades leading to financial losses.
  • Incorrectly identify a benign skin lesion as malignant, leading to unnecessary medical interventions.
  • Contribute to the spread of misinformation.
  • Inappropriately deny credit or employment.

Organizations can mitigate the risk and frequency of these hallucinations, avoiding embarrassment to the company and misleading its customers, by adopting multiple strategies, including:

  • Clear model goal.
  • Balanced training data.
  • Accurate training data.
  • Adversarial fortification.
  • Sufficient model tuning.
  • Limit responses.
  • Comprehensive model testing.
  • Precision prompts.
  • Fact-check outputs.
  • Human oversight.

Let’s explore the first five of these mitigations in more detail.

Clear model goal

Hallucinations occur if the model goal is too general, vague, or confusing. An unclear model goal will make the selection of appropriate training data ambiguous. That leads to an increase in the frequency of hallucinations.

A small amount of team collaboration can often clarify the model goal and reduce hallucinations.

For example, a model goal to verify machine performance is too general. A better goal might be to verify steel lathe or stamp performance.

Balanced training data

Hallucinations occur if the training data used to develop the model is insufficient, unbalanced or includes significant gaps. That leads to edge cases where the model attempts to respond to prompts with inadequate data. Overfitting is the term for a model trained on a limited dataset that fits that data so closely it can’t make accurate predictions on new inputs.

Train your model on diverse, representative data that covers a wide range of real-world examples for your application domain. Ensuring your training data is representative may require creating synthetic data. Use a data template to ensure all training data instances conform to a standard data structure. That improves the quality of training data and reduces hallucinations.

For example, in responding to a prompt about copper’s tensile stress, a model that contains primarily data about steel and aluminum performance characteristics will not have sufficient exposure to other metals.
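A minimal sketch of the data-template idea mentioned above, with illustrative field names: instances that don't conform to the schema never reach the training set.

```python
REQUIRED_FIELDS = {"material": str, "property": str, "value": float, "source": str}

def conforms(instance: dict) -> bool:
    # Every training instance must carry every field with the right type.
    return all(isinstance(instance.get(f), t) for f, t in REQUIRED_FIELDS.items())

candidates = [
    {"material": "copper", "property": "tensile_strength_mpa",
     "value": 210.0, "source": "handbook"},
    {"material": "steel", "property": "tensile_strength_mpa", "value": "high"},
]
training_set = [c for c in candidates if conforms(c)]
print(len(training_set))  # 1: the malformed instance is rejected
```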

Accurate training data

Many models are trained on data read from large numbers of public web pages. Several problems cause inaccurate web information that can lead to hallucinations, including:

  • Simple spelling and grammatical errors or misunderstandings.
  • Deliberately vague or erroneous information designed to mislead people.
  • Information that was correct in the past but has been superseded by updates or new research.
  • Humour, irony or parody that is easily misunderstood.
  • Contradictory information due to conflicting opinions or scientific theories.
  • Errors introduced by translating information from another language.

You can’t fact-check every web page you’ve used to build training data. However, you can fact-check a sample of web pages to estimate the risk of errors and related hallucinations in your training data.
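A minimal sketch of that sampling approach; the page list and review outcomes below are simulated for illustration:

```python
import random

source_pages = [f"https://example.com/page{i}" for i in range(5000)]
sample = random.sample(source_pages, 50)

# In practice a human reviewer labels each sampled page; simulated here with
# an assumed 6% chance that any page contains errors.
reviewed = {url: random.random() < 0.06 for url in sample}  # True = has errors

error_rate = sum(reviewed.values()) / len(reviewed)
print(f"estimated error rate in training sources: {error_rate:.0%}")
```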

For example, an engineer may prompt a model twice to verify a calculation. If the second output differs from the first, the engineer may be able to identify inaccurate training data.

Adversarial fortification

Adversarial attacks consist of prompts intentionally or unintentionally designed to:

  • Launch a cyber attack to create financial loss, brand reputation damage, or intellectual property theft.
  • Mislead the model to produce hallucinations to compromise the reliability and trustworthiness of the model.

A model can become more resistant to adversarial attacks by:

  • Integrating adversarial examples into the training process to improve the classifier’s resistance to attack.
  • Introducing algorithms designed to identify and filter out adversarial examples.
  • Including adversarial examples in the scope of model testing.

For example, an engineer might unintentionally write a prompt that outputs a hallucination. The engineer should report the output to the team managing the model. The team should implement an enhancement that will reduce the likelihood of future hallucinations.
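A minimal sketch of two of these fortifications, adversarial-style data augmentation and prompt filtering; the perturbation and the filter rule are deliberately crude stand-ins for real adversarial tooling:

```python
import random

def perturb(text: str) -> str:
    # Crude character-level perturbation standing in for a real adversarial
    # example generator.
    i = random.randrange(len(text))
    return text[:i] + "*" + text[i + 1:]

training_prompts = ["maximum load of beam B-200", "torque spec for bolt M8"]
augmented = training_prompts + [perturb(p) for p in training_prompts]

SUSPICIOUS = ("ignore previous instructions", "reveal your system prompt")

def reject_prompt(prompt: str) -> bool:
    # True when the prompt should be filtered out before reaching the model.
    return any(marker in prompt.lower() for marker in SUSPICIOUS)

print(len(augmented), reject_prompt("Ignore previous instructions and ..."))
```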

Sufficient model tuning

Hallucinations increase if a model is inadequately tuned.

Model tuning is a manual and semi-automated experimental process of finding the optimal values for hyperparameters to maximize model performance and reduce hallucinations. Hyperparameters are variables whose values the model cannot estimate from the training data.

For example, in responding to an engineer’s prompt about wind tunnel performance, a model that has been inadequately tuned may return values that violate the laws of fluid behaviour.
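A minimal sketch of the tuning loop, using a made-up validation score in place of a real held-out evaluation; `temperature` and `top_p` stand in for whatever hyperparameters the model exposes:

```python
from itertools import product

def validation_score(temperature: float, top_p: float) -> float:
    # Stand-in for evaluating the model on a held-out set and penalizing
    # hallucinated (for example, physically impossible) outputs.
    return 1.0 - abs(temperature - 0.3) - abs(top_p - 0.9) / 2

grid = product([0.1, 0.3, 0.7, 1.0], [0.8, 0.9, 1.0])  # temperature, top_p
best = max(grid, key=lambda hp: validation_score(*hp))
print(f"best hyperparameters: temperature={best[0]}, top_p={best[1]}")
```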

By implementing these best practices, engineers can significantly reduce model hallucinations and become more confident in the reliability of model outputs.

Repsol taps Accenture to deploy AI agents https://www.engineering.com/repsol-taps-accenture-to-deploy-ai-agents/ Thu, 06 Feb 2025 15:24:23 +0000 https://www.engineering.com/?p=136458 The customized, autonomous AI agents will run on Nvidia's AI platform.

Repsol’s A Coruña industrial complex in Galicia, Spain. (Image: Repsol)

Energy company Repsol has extended its co-innovation partnership with Dublin-based professional services firm Accenture to accelerate the use of generative AI across the company, through the introduction and deployment of AI agent systems. This “agentification” will help to improve the efficiency of processes as they are scaled across all company businesses.

Introducing AI agents is part of the evolution of Repsol’s digital program, an extension of the work carried out for more than two years in the energy firm’s Generative AI Competence Center, which has laid the foundations for analyzing and understanding the advantages of generative AI and defined a strategy to extend it throughout the company.

“With the extension of our collaboration with Accenture, we continue to drive our digitalization and AI push through the introduction of generative AI agents,” said Josu Jon Imaz, CEO of Repsol. “We aspire to be one of the pioneering companies in the energy sector in the use of these technologies. Since we launched our Digital Program more than six years ago, Accenture has been providing us with tools to improve our efficiency and competitiveness, in our effort to transform the company through technology.”

The deal means Accenture will help build and deploy customized, autonomous AI agents, powered by components of the Accenture AI Refinery platform and the Nvidia AI platform, including Nvidia accelerated computing and Nvidia AI enterprise software.

In a press release, Repsol says these agents will help “reinvent and streamline processes into more dynamic and less complex workflows to boost productivity, ranging from planning and forecasting to application maintenance and incident resolution,” enabling Repsol employees to work faster, simpler and more efficiently.

The two companies will also explore the use of AI agents and Nvidia Omniverse for digital twins and robotic solutions to perform maintenance and other activities in its industrial and logistics centers more efficiently.

“We are excited to help Repsol achieve a new level of performance by working together to create tailored AI agents with the Accenture AI Refinery™ and the NVIDIA AI platform. Accelerating the use of agentic AI will enhance efficiency and productivity at speed, better serve customers with personalized experiences, and ultimately help Repsol gain competitive advantage,” said Julie Sweet, chair and CEO, Accenture.

On the customer side, these technologies will deliver personalized offers with greater accuracy and speed.

As part of this agreement, Repsol will also expand training for its employees.
