
How to choose the best MEMS sensor for a wireless CbM system



Learn in this paper how to choose the best MEMS sensor by considering different factors such as noise, bandwidth, g range, power and turn-on time. The article will also introduce the Voyager platform, a robust vibration monitoring platform enabling designers to rapidly deploy a wireless solution to a machine or test setup.

Key Learning Points:

  • What are the differences between competing wireless sensor networks in the CbM market?
  • Why are MEMS sensors replacing piezoelectric vibration sensors?
  • How to choose the best MEMS sensor for wireless CbM applications in harsh RF environments?
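
As a rough, purely illustrative sketch of weighing the factors listed above, one might screen candidate parts against an application's requirements before comparing the survivors on power; the sensor names and figures below are placeholders, not data from the whitepaper:

```python
# Hypothetical sketch: rank candidate MEMS accelerometers against a wireless CbM
# requirement set. All specs and part names are illustrative placeholders.

REQUIREMENTS = {
    "max_noise_ug_rthz": 100,   # noise density ceiling, ug/sqrt(Hz)
    "min_bandwidth_hz": 5000,   # flat response needed to catch bearing faults
    "min_g_range": 16,          # headroom for peak vibration amplitude
    "max_active_power_mw": 30,  # battery budget per measurement burst
    "max_turn_on_ms": 10,       # wake-up time for duty-cycled operation
}

CANDIDATES = [
    {"name": "Sensor A", "noise": 25, "bw": 11000, "g": 50, "power": 30, "turn_on": 5},
    {"name": "Sensor B", "noise": 75, "bw": 3000,  "g": 16, "power": 10, "turn_on": 2},
    {"name": "Sensor C", "noise": 120, "bw": 6000, "g": 8,  "power": 5,  "turn_on": 1},
]

def meets_requirements(c, r):
    return (c["noise"] <= r["max_noise_ug_rthz"]
            and c["bw"] >= r["min_bandwidth_hz"]
            and c["g"] >= r["min_g_range"]
            and c["power"] <= r["max_active_power_mw"]
            and c["turn_on"] <= r["max_turn_on_ms"])

viable = [c for c in CANDIDATES if meets_requirements(c, REQUIREMENTS)]
# Among viable parts, prefer the lowest energy per wake-up burst.
best = min(viable, key=lambda c: c["power"] * c["turn_on"]) if viable else None
print("Viable:", [c["name"] for c in viable], "| Pick:", best["name"] if best else "none")
```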

Download free whitepaper

 


Published on 20/07/2022 in industry 4.0, future of manufacturing, automation, white papers, sponsored

As industry goes all-in on automation, what happens next?



While the world put itself on hold for much of the past two years, I found myself busier than ever – collaborating with various global companies to urgently accelerate their automation strategies. Now, with plenty of interested parties keen to play catch up, I’m often asked for my take on the state of the nation in automation. My answer? It’s exciting times, but there are plenty more challenges to face.

Let me explain. In recent years when we talked about automation in industry, the conversations were predominantly focused on manufacturing, specifically in high-volume lines in the automotive industry. That world is history. Industry is all-in on automation as businesses scramble to realise new use cases from retail, logistics, wholesale and warehousing to pharma, medical devices and the broader life sciences arena.

The latter is a particularly interesting one. Automation in areas such as surgical robotics has been a big focus for a while of course, but now attention is shifting towards the exciting intersection of biology and engineering. AI-powered automation solutions can bring heightened levels of control to the lab and propel manufacturing at scale for life-changing medicines. The pathway to the mainstream market for radical new cell and gene therapies can be smoothed and scaled by the creation of rigorously controlled manufacturing environments.

The potency of this trend towards cross-industry adoption - this democratisation of automation - was really brought home to me during my recent gig as a panellist at the Automate 2022 show in Detroit. We are emerging from global pandemic restrictions and lockdowns with all the drivers for change in place: maturing technology capabilities, sky-high consumer demand, and an unprecedented global labour shortage. “We’re ready for what’s next!” was the shared sentiment at the conference, which saw unprecedentedly high attendance and strong representation across sectors and the automation supply chain.

That raises the big question, of course – where do we go from here? As with all innovation curves, we’re faced with new opportunities and fresh challenges. The big opportunity right now lies in the consolidation of technologies and platforms that are concerned with automation. Look at it this way: if you’re a technology developer, you’ve now got the chance to apply your systems thinking approach in a way that cuts across industry sectors. Those who seize the moment can own the future.

The challenging side of the coin comes with balancing the platform approach – incorporating common elements such as AI systems engineering, sensing and image processing – with complex, sector-specific needs. This is a tricky problem to navigate. Essentially, the task is to advance your platform development in a way that allows efficient scaling and customisation of the end-user application simultaneously.

It goes without saying that a manufacturing assembly line, for example, is vastly different from a logistics environment. For a start, the former has not traditionally needed the level of human interaction, nor the flexibility and task complexity, demanded by the latter. In fact, there’s a whole host of specific checks and balances to consider across all sector applications, including the cost of technology deployment against the available profit margin. That’s where business models that allow for flexible cashflow management (for example, the use of CAPEX/OPEX strategies) become key to meeting business needs.

The RaaS (robots as a service) option certainly has its benefits, depending on the use case. It’s particularly suitable for applications such as standalone autonomous vehicles or autonomous mobile robots, which are easier to plug into an existing operation. RaaS adoption also means users are always benefitting from the latest technology, which is maintained and kept up to date by the technology supplier.

If you’re going to put billions of dollars into building your own technology, you’re investing in something that’s going to take at least five years to deploy at scale, which means you miss out on more than five years of technology development. Nevertheless, many of the global logistics giants will continue to opt for the CAPEX route as automation in their environment requires more existing infrastructure customisation and so may not be suitable for the XaaS model. Here, the strategy must be to design robust, mission-critical applications based on a systems-thinking approach that pays heed to crucial factors such as convergence and interoperability.

This is an exciting period for automation. Innovative tools such as digital twins and simulated environments are ready to de-risk projects and slash the costs of real-world machine learning development. Meanwhile, a wave of start-ups is arriving on the scene offering potential partnerships, differentiating technology and a springboard to the future. Watch this space, we’re in for quite a ride.

Oli Qirko is president, North America, at Cambridge Consultants, part of Capgemini Invent.


Published on 11/07/2022 in comment, automation, industry 4.0, future of manufacturing

Siemens and Nvidia aim to give digital twins more realism



On the face of it, the decision by Siemens and Nvidia to forge a link between their tools seems simple enough. So simple that executives from both companies were at pains to point out that it really isn’t that simple.

The core of the agreement, presented at a joint event on Wednesday, will be to ensure Siemens tools that are used to design everything from chips to factories will ultimately be able to feed data to Nvidia’s virtual-world building software Omniverse to create more photorealistic visualisations. 

“The digital twin is physics-based. It doesn’t just look like the real thing, it behaves like the real thing,” claimed Siemens CEO Roland Busch. “This is not about animation but simulation. If you don’t mimic the real world accurately, you don’t get the benefit out of it. The digital twin has to be comprehensive. If you simulate something on a digital twin and you figure out you want to change something in the real world, you better get it right.”

Jensen Huang, Nvidia co-founder and CEO, pointed to the need for the digital twin and the physical systems it represents to match: “You need to believe they are the same. That’s why it’s so profoundly different to a video game.”

Busch used the example of robots in a factory to describe how their tie-up is meant to work. “Imagine your factory in China is slowing down: it produces fewer parts every day – nothing bad but it’s all adding up. The team at the factory has no idea why it’s happening.”

He described bringing a range of engineers at various locations, including the manufacturer of the robots on the production line, into a VR environment. “They immerse themselves in the digital twin of the plant. It mirrors exactly what is happening in the real plant and it is not a still photograph. It is in real time, down to the physical behaviour of the robots. The team travels back in time in the metaverse to when the output was strong to see what has changed since then. They realise that one robot on the feeder line has missed its latest software update and it’s out of sync. The team updates the software in the digital twin and the robot immediately speeds up and works in sync. Now the team is confident enough to update the software on the real robot.”

In this synthetic example, there’s one nagging question. Would a problem like this actually need a VR environment to determine the issue? And if not, why is it a good example of how this tie-up might work?

It’s generally easy to argue that a VR-type environment makes things easier to debug because it makes them look more realistic. But the issue with VR is that visually realistic is not the same as tangible – interacting with the system remains tough because, despite all the money that’s been poured into headset development, the only sense that has received much attention in the metaverse is sight. In this case, being able to manipulate the robots is not all that important: you wouldn’t want to arm-wrestle an industrial robot under most circumstances. What matters is being able to make sense of what the underlying data are telling you about the system. There is a chance someone might look at how the robots are moving and decide, “you know, that’s the old software at work”. But it might equally be that a more abstract simulation of the production flow would show one robot repeatedly missing deadlines or waiting ages to get started, letting you backtrack to why that might be the case.
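
To make that "more abstract view" concrete, here is a deliberately simple sketch, with invented station names and cycle times, that flags a lagging robot from logged data alone, with no rendering required:

```python
# Illustrative only: spot a lagging robot from logged cycle times (seconds).
# The station names and numbers are invented for the example.
from statistics import median

cycle_times = {
    "feeder_robot_1": [4.1, 4.0, 4.2, 4.1],
    "feeder_robot_2": [4.0, 4.1, 4.0, 4.2],
    "feeder_robot_3": [5.6, 5.8, 5.7, 5.9],  # the out-of-sync unit
    "feeder_robot_4": [4.2, 4.0, 4.1, 4.1],
}

averages = {name: sum(t) / len(t) for name, t in cycle_times.items()}
baseline = median(averages.values())

# Flag anything running more than 20% slower than the line's typical cycle.
laggards = [name for name, avg in averages.items() if avg > 1.2 * baseline]
print("Check configuration/firmware on:", laggards)
```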

In reality, such visualisations may simply work better as sales tools. Tony Hemmelgarn, CEO of Siemens Digital Industries Software, used the example of a yacht builder. “With our joint solution, we can enable our customer to do a walkthrough of this yacht on a photorealistic version of the digital twin…what this means for our customers is a quicker approval for them when they go to their customers and talk about what they are trying to design for them.”

It need not just be the cosmetics. Dirk Didascalou, CTO of Siemens Digital Industries Software, pointed to the example of machine-tool maker Heller. Siemens has already worked with Heller on projects to optimise the flow of parts to robots based on where the tools needed to work on them sit in their magazines. Heller engineers had found long waits can develop if the tools are not in the best order because of the time it takes to transfer them from different positions in the magazine. In the setup they devised, the machine tool calls on a computer sitting nearby to compute the best ordering based on which parts are ready to work on.
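
The precise optimisation Siemens and Heller use is not described in detail, but a simple greedy sketch conveys the scheduling idea: pick the next ready job whose tool requires the least magazine movement. The slot positions, magazine size and job names below are invented.

```python
# Hypothetical sketch of the scheduling idea only, not the Heller/Siemens system:
# order the ready jobs so each tool change moves the magazine as little as possible.

def transfer_cost(slot_a, slot_b, magazine_size):
    """Shortest rotation distance between two magazine slots (circular magazine)."""
    d = abs(slot_a - slot_b)
    return min(d, magazine_size - d)

def greedy_order(jobs, tool_slots, start_slot, magazine_size=60):
    """Greedily pick the next ready job whose tool is closest to the current slot."""
    remaining = list(jobs)
    current = start_slot
    order = []
    while remaining:
        nxt = min(remaining, key=lambda j: transfer_cost(current, tool_slots[j], magazine_size))
        order.append(nxt)
        current = tool_slots[nxt]
        remaining.remove(nxt)
    return order

tool_slots = {"part_A": 3, "part_B": 41, "part_C": 5, "part_D": 38}
print(greedy_order(["part_A", "part_B", "part_C", "part_D"], tool_slots, start_slot=0))
# -> ['part_A', 'part_C', 'part_B', 'part_D']: tools close together are visited consecutively
```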

According to Didascalou, Heller is taking a further step in using software to change how its machine tools behave with what he called a “hardware as a service” strategy. “Now they are going even further with a new business model where customers can buy or lease a machine at a much lower price that has the basic functionality and then, when they need it or if they just want to try it, they can upgrade to the premium features at the press of a button,” Didascalou explained.

Higher-fidelity digital twins will likely help this kind of process by making it possible to build a model of the client’s factory into a computer model and use that to evaluate how different upgrades and changes will affect throughput. For factory operators taking more direct control of their systems, the digital twin provides the ability to test out software updates and changes to see if they will cause problems down the line, though there is the question of how much photorealism this requires. One could conceive of a scenario where the modelling is of such fidelity that simulations of different temperatures and curing times for coatings might yield clear visual differences in the virtual realm, but how much work would it take to model the physics of all those processes and render them accurately compared to more targeted experiments that focus on the core data and more abstract visualisations?

The argument as to how important photorealistic rendering is to analysis is one that has run for years in medicine, particularly radiology, where experiments have shown that – possibly because of the way people are trained – sticking with 2D representations of tissues leads to better diagnoses than fancier 3D models. In forensics, however, experiments point to the 3D models being more important in explaining outcomes to non-experts rather than the experts, who find it more efficient to stick to visualisations that show them what they are looking for. Much like the yacht’s carpets in Hemmelgarn’s example, photorealism is good for getting input from others but might not be all that great for the core task.

However, the deal is not all eye candy. There are two underlying themes to the Siemens-Nvidia tie-up that are less obvious but potentially more important. One is the stated desire by both companies to have their software tools talk to each other more efficiently, as well as for the industry to begin to coalesce on standards around how data from design tools and abstract simulations is brought into VR simulations. Nvidia is keen on the USD interface originally devised by Pixar for automating the physics in its animations. Huang regards USD as potentially being the HTML of the metaverse. The two CEOs also stressed the need for communication to go all the way from the physical world to the metaverse to the design tools, so that when someone records a change in one it is automatically reflected in the others, rather than relying on engineers making manual changes and running the risk of the models diverging.
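
For readers curious what USD looks like in practice, here is a minimal sketch using Pixar's open-source USD Python bindings (the pxr module, published on PyPI as usd-core). The scene structure and the custom firmwareVersion attribute are invented for illustration; they are not part of the Siemens-Nvidia integration.

```python
# Minimal sketch of describing a factory asset as a USD stage.
# Requires Pixar's USD Python bindings: pip install usd-core
from pxr import Usd, UsdGeom, Sdf

stage = Usd.Stage.CreateNew("feeder_line.usda")
UsdGeom.Xform.Define(stage, "/FeederLine")
robot = UsdGeom.Xform.Define(stage, "/FeederLine/Robot01")

# Attach a custom attribute that a monitoring tool could keep updated, so the
# "missed software update" in Busch's example would be visible in the twin.
fw = robot.GetPrim().CreateAttribute("firmwareVersion", Sdf.ValueTypeNames.String)
fw.Set("2.4.1")

stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())  # human-readable .usda text
```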

“The digital twin will run forever concurrently with the plant. As it runs concurrently, it can predict properly what is happening physically and then in the future when you want to update it, you have confidence that the virtual model is consistent with the physical model,” said Huang.

How consistent is another matter. There is potentially a crisis of bureaucracy here. BMW board member Milan Nedeljković pointed out: “The digital twin is not the challenge, the challenge is to link the digital twin into the existing systems.” 

When Busch used the example of the lost software update in the Chinese factory, I thought a more likely scenario would be a subtle change in practices, possibly something as simple as a new worker accidentally leaving a trolley in the wrong place and forcing an automated guided vehicle to find a new route. Are factories going to be so closely monitored that this would be reflected in the virtual world? With cameras everywhere, it’s possible, but that introduces new questions not just about the legality of workplace surveillance but its influence on morale.

A second outcome is likely to be a lot more benign. One big advantage of highly realistic virtual worlds is that they make the training of AI a lot more effective. In the field of automated driving, scenarios based on synthetic data rendered in virtual worlds are already being used to provide neural networks with more data than can be obtained even in millions of miles of real-world driving. Huang pointed to factory automation as another situation where synthetic data can improve the training of robots expected to work closely with humans.

Siemens has its own PAVE360 environment, which models the electronics and sensors of vehicles but is not yet a fully immersive environment. With the Omniverse tie-up, that is a possibility for the future. Similarly, virtual-world training should make it easier to train industrial robots to work more closely alongside humans in a safe way rather than rely on safety cages. The virtual world makes it possible to design scenarios you simply cannot test safely in the physical world to ensure the robots avoid them.

Though it’s a deal that highlights surface changes, there is potentially a lot behind it if the two companies are able to achieve what they hope. If not, it will be yet another example of VR hope not getting remotely close to reality.


Published on 30/06/2022 in virtual reality, electronic connections, simulation, industry 4.0, artificial intelligence

Qualcomm unveils new chips with next-gen Wi-Fi and Bluetooth support



The new chips will include Qualcomm's 'FastConnect 7800' platform, which will allow them to support the upcoming Wi-Fi 7 standard, a specification that isn't expected to be formalised until 2024 at the earliest.

The firm said the chips would be the “world’s first” with Wi-Fi 7 compatibility, enabling new performance benchmarks with peak speeds of 5.8Gbps and sub-2-millisecond latency. The platform also includes support for 'High Band Simultaneous' technology, which can use the 5GHz and 6GHz spectrum bands concurrently in order to keep latency to an absolute minimum.
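
As a back-of-the-envelope check on what the quoted peak figure means in practice (real-world throughput will sit well below the headline rate), the transfer time for a large file can be estimated directly:

```python
# Back-of-the-envelope using only the 5.8Gbps peak figure quoted above;
# actual throughput will be considerably lower than the headline rate.
peak_bits_per_s = 5.8e9
file_bytes = 10 * 1024**3          # a 10 GiB file

seconds = (file_bytes * 8) / peak_bits_per_s
print(f"Theoretical best case: {seconds:.1f} s")   # roughly 14.8 s
```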

For consumers, the low-latency features may be most keenly appreciated by online gamers, where even split-second delays can make the difference between victory and defeat. The technology is also useful for firms leveraging Industry 4.0, which sees manufacturers integrating new technologies including IoT, cloud computing and analytics.

Wi-Fi 7 comes only a few years after the finalisation of Wi-Fi 6, which is still yet to be supported by many consumer devices. In comparison to the previous generation, Wi-Fi 6 allows more devices to connect at one time and enhances data handling to ensure that heavy usage from one user will not negatively impact others on the network. The specification is particularly useful in crowded public areas, such as at sporting events or busy commuter hubs.

The FastConnect 7800 platform succeeds the 6900, which has been included in most Android flagship devices since early 2021.

Qualcomm’s new chips also support a 'Dual Bluetooth' system that should enable power savings for smartphones using the wireless standard, as well as potentially doubling the range. It will also enable higher-bandwidth music streams, up to lossless CD-quality audio. As the most popular option for wireless audio streaming, Bluetooth has taken flak in the past for its requirement that music streams be compressed. It is also not considered ideal for gamers due to some inherent latency between devices. Qualcomm said its new Bluetooth technologies will help to reduce latency by up to 25 per cent.

“With FastConnect 7800, Qualcomm Technologies reasserts its leadership by defining the future of wireless connectivity,” said Dino Bekis, Qualcomm vice president.

“Coupled with up to 50 per cent lower power consumption and Intelligent Dual Bluetooth with advanced Snapdragon Sound capabilities, FastConnect 7800 is simply the best client connectivity offering in the industry.”

Hot on the heels of Qualcomm’s new announcements at MWC, Taiwanese rival Mediatek announced its own suite of midrange Dimensity chips that are designed to compete with Qualcomm’s top-of-the-line Snapdragon 888 chips from 2021.

While Qualcomm has long been the dominant chipmaker for Android flagship devices, Mediatek has been improving its competitiveness and is now just slightly behind Qualcomm when it comes to benchmarking statistics.


Published on 01/03/2022 in chipmakers, silicon chips, consumer technology, industry 4.0, gadgets, electronics

British Sugar installs private 4G network to automate its factories



The custom network, which was built by Virgin Media O2, will be used by British Sugar to implement next-generation manufacturing techniques at all four of its sites, spanning three counties.

The network will connect multiple IoT devices, allowing for modernisation of the production process, such as the ability to use AI systems, automated production lines, robotics and drones. The firm believes the network will help it to increase productivity, boost efficiency and even improve health and safety on site.

Following a multi-million-pound investment, British Sugar will create four ‘factory of the future’ sites, automating the manufacturing process for sugar and other co-products.

Part of this will rely on AI to monitor operations in real time and predict maintenance needs and potential downtime in advance. This reduces disruption, cuts down on wastage and can deliver cost and energy savings that help to avoid unnecessary emissions.
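
As a toy illustration of the kind of real-time monitoring described above (the readings, threshold and logic are invented; production systems use far richer models), even a simple statistical check can flag drift before it becomes downtime:

```python
# Toy illustration of condition monitoring flagging drift before failure.
# Baseline readings and the 3-sigma threshold are invented for the example.
from statistics import mean, stdev

baseline = [0.92, 0.95, 0.91, 0.94, 0.93, 0.92, 0.96, 0.94]   # healthy vibration RMS (g)
mu, sigma = mean(baseline), stdev(baseline)

def needs_maintenance(reading, k=3.0):
    """Flag readings more than k standard deviations above the healthy baseline."""
    return reading > mu + k * sigma

for r in [0.95, 0.97, 1.02, 1.31]:
    print(r, "-> schedule maintenance" if needs_maintenance(r) else "-> ok")
```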

The firm added that the 4G private network helps to improve security and control, and enables high-bandwidth connectivity in a complex factory setting where introducing Wi-Fi is challenging: the environment is highly metallic and requires both indoor and outdoor coverage.

The private network switch-on is part of a major smart upgrade to British Sugar’s factories, delivered through a seven-year partnership with Virgin Media O2 Business which has brought in Nokia as a strategic partner on the project.

The new network has also been designed to be easily upgradable to 5G where necessary, as British Sugar looks to introduce more complex processes that will benefit from the higher speeds and lower latency of 5G.

This could include robotics to streamline production even further, automated (driverless) ground vehicles, and connected drones that can cover a large area and monitor tall structures such as silos and lime kilns remotely and safely.

Paul Hitchcock, head of factory organisation at British Sugar, said: “This work will help us progress in our ‘Factories of the Future’ project, using the latest technology to ensure that our sites are operating as efficiently as possible.”

Jo Bertram, managing director at Virgin Media O2 Business, said: “Announcing the switch-on of the first multi-site private mobile network is a huge milestone for us at Virgin Media O2 Business, but it’s also a significant step for British manufacturing as a whole, taking us that bit closer to Industry 4.0 and all the benefits this offers. Private networks like these are a big part of building the connected factories of the future, so British manufacturing can keep pace with the rest of the world.”

The 4G private network is now operational at British Sugar’s Wissington site in Norfolk, where the firm said that benefits are already being realised. The new private network will also provide connectivity across British Sugar’s other factories: Cantley in Norfolk, Bury St Edmunds in Suffolk, and Newark in Nottinghamshire.


Published on 25/01/2022 in manufacturing, future of manufacturing, 4g, 5g, mobile networks, industry 4.0, automation, telecommunications

View from India: Of unicorns and EVs



Home to 90 unicorns, India is the world’s third-largest unicorn hub. In finance terms, a unicorn is a startup company valued at over $1bn. According to the Hurun Global Unicorn Index 2021, the US tops the list, followed by China and then India, whose startup ecosystem comprises approximately 60,000 startups.

When we look at the startup community, most of the new ones are in the fintech and e-commerce space. That’s understandable, as online retail and payment gateways have been preferred modes of purchase ever since the first wave of the pandemic. Moreover, the data emerging from online purchases reveals consumer preferences, so it becomes an asset that retailers can use to tailor their offerings to customers. As Covid has also spurred the need for online learning and teaching, ed-tech startups are on the rise; these startups have created innovative solutions and large-scale employment.

At the recent Startup India Innovation Week, Prime Minister (PM) Narendra Modi revealed that annual funding in the Indian startup space has registered phenomenal growth. It has more than trebled, growing from around $11bn to $36bn within a year. Throwing light on the spread, the PM said that there is at least one startup in every state, spread across more than 625 districts. Nearly half the startups are operating from Tier 2 and Tier 3 cities, which means that many young people from these regions are able to sell their ideas, build startups and employ like-minded people. It would be great if some startups worked in the rural space for the last-mile-delivery consumer. It’s not just essentials that could be a market here: the aspiration level of the rural segment is growing, and its rainbow of expectations may include consumer durables, fast food and devices.

What is interesting is that domestic and international investors have shown an interest in the drone sector, in sync with the Drone Policy. Drone companies have received orders worth about 5bn rupees ($67m, or £49m) from the Army, Navy and Air Force, while the government has leveraged large-scale usage of drones for mapping village properties under the SVAMITVA (Survey of Villages and Mapping with Improvised Technology in Village Areas) Scheme. Beyond that, the skies have opened up opportunities for drone startups, many of which could tap agriculture and services such as home delivery of medicines.

In the auto industry, the government has kick-started the electric vehicle (EV) charging ecosystem. EV owners can charge their vehicles at home or at the office from existing connections, and at domestic tariffs. The power ministry has revised guidelines and enabled land to be made available for establishing public charging stations (PCSs) under a revenue-sharing model between public entities and government. As long as standard protocols are met, individuals and entities don’t require a licence to set up a PCS, which means the EV route may attract companies or land developers to put up PCSs. Batteries are another segment: lithium, used to make them, could be in short supply. One hopes that battery makers will explore newer technologies to mass-produce batteries and scale down costs. Lithium disposal, a spin-off from the EV journey, can generate income and employment for many.

EV manufacturing requires much more copper than regular internal combustion engine vehicles do; this could be an opportunity for copper manufacturers and companies to meet the demand coming from EVs.

Industry 4.0 and hyper-connectivity are projected to make manufacturing smart and productive. Smart manufacturing will take into account the design and every aspect of the value chain, all of which can pave the way for accuracy, automation, real-time monitoring and dashboard updates in manufacturing. Data will be shared and leveraged by the various operations that go into manufacturing, and supply chains need to gear up for this. Along with the supply chains, the challenge is people and processes, and whether they are equipped to handle such high-tech machines. No doubt the thrust is on improving products and producing them at tremendous speed, but this can only happen if the workforce is skilled enough to adapt to the new order. Companies need to work backwards in order to move forward. For example, individuals at the design stage need to have know-how of the technology involved; even those on the shop floor need to be equipped. Similarly, people representing the different stages of the value chain should be trained to keep pace with new changes.

Open protocols are likely to become mainstream as people continue to work from home. That’s quite understandable, as open protocols enable the transmission of data from one computer to all the other computers in a loop.

Can we create innovation capabilities in emerging areas like cyber security, artificial intelligence and pharma? This could lead to a new generation of innovative products, solutions and services, and necessitate collaboration between companies and even countries.


Published on 20/01/2022 in view from india, india, industry 4.0, startups

New Study – Seamless Connectivity Fuels Industrial Innovation



For many manufacturers, operational processes have historically been designed with the assumption that staff will be on-site all the time. The disruptions of the COVID-19 pandemic, along with the evolution of advanced network connectivity and connected assets and technologies, have accelerated long-overdue modernisation and digitisation efforts across the manufacturing industry. Today's and tomorrow's manufacturing leaders are tossing aside processes designed to prioritise cost control, efficiency and predictability, and replacing them with ones that emphasise flexibility, innovation and resilience.

Forrester Consulting, on behalf of Analog Devices, evaluated the state of industrial modernisation, including efforts to improve network reliability. A total of 312 senior manufacturing leaders responsible for defining industrial connectivity strategies within their respective organisations were surveyed to explore this topic.

Download today to explore the unique insights and perspectives provided within this comprehensive study!

Download free report

E&T


Published on 12/08/2021 in design and production, manufacturing, industry 4.0, sponsored

Busted: five myths that are blocking adoption of AI in manufacturing



For all the promise of digital transformation and the role artificial intelligence will play in driving the factories of the future, its adoption is still relatively nascent across much of the manufacturing sector. There are a number of reasons for this, not least a lack of understanding of what AI actually is and the changes it will bring. Separating the facts from the (science) fiction can be a challenge. Confusion, coupled with uncertainty, breeds fears and misconceptions, whether that’s around security risks, job losses, losing control, or what the technology can and cannot do.

Busting some of the most common myths can help set the record straight about AI and what it truly means for manufacturers.

Myth 1: AI is the end goal

There’s a common misconception that AI itself is the benefit. I’ve had countless conversations with customers who miss the point that AI is a mechanism, not a benefit. I’ve heard “I’ll wait until it ‘does AI’” more times than I can count. The reality is that the value of artificial intelligence lies not in the process itself but, like any kind of data analytics, in its ability to solve problems faster and speed up production. AI is the how, not the why. In the case of 3D printing, for example, technologies such as Markforged’s Blacksmith use closed-loop machine learning to correct for process deviations, ensuring manufacturers get the right part every time, without the need for costly and time-consuming human inspection.

The second part of the AI equation is federated learning. Apple and Android smartphones use federated learning to improve with every text message typed, based on how both individual and collective users interact with their keypads. Similarly, our network of 10,000+ securely connected 3D printers applies this AI technology to allow each machine to ‘get smarter’ with every print, all while maintaining the highest standards of customer data privacy, confidentiality and integrity. By analysing the data from the ‘fleet’ of printers, AI can spot corrections or tweaks that are being made regularly – for example, where overhang angles or infill patterns aren’t quite right. These opportunities for improvement can then be fed back into the system, improving the collective output of the printers without the need for human intervention.
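To make the fleet-learning idea concrete, here is a minimal sketch of federated averaging. It is purely illustrative and not Markforged's actual implementation: every function name, parameter and data shape below is a hypothetical assumption. Each printer computes a model update from its own anonymised process metadata, and only those updates, never raw part data, are averaged centrally.

import numpy as np

def local_update(weights, local_metadata):
    # Hypothetical on-printer step: nudge the shared model using only
    # anonymised process metadata (e.g. observed deviation per print),
    # never the customer's part geometry or IP.
    gradient = np.mean(local_metadata, axis=0) - weights
    return weights + 0.1 * gradient

def federated_average(global_weights, printer_datasets):
    # Each printer computes its own update locally; the server only ever
    # sees the resulting weight vectors, which it averages (FedAvg-style).
    updates = [local_update(global_weights, data) for data in printer_datasets]
    return np.mean(updates, axis=0)

# Toy usage: three printers, each holding a handful of anonymised metadata rows.
rng = np.random.default_rng(0)
fleet = [rng.normal(size=(20, 4)) for _ in range(3)]
weights = np.zeros(4)
for _ in range(10):
    weights = federated_average(weights, fleet)
print(weights)

The point this sketch is meant to echo from the paragraph above is that the central server only ever handles derived numbers, so improvements can propagate across the fleet without customer data leaving the machine.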

Myth 2: AI is not secure and relies on proprietary data

There is a misconception that, because AI relies on data, those using it must share their intellectual property to derive benefit from it. This is not the case. When it comes to AI in 3D printing, customer IP and part data stay separate within secure boundaries. It is not this proprietary information that feeds into the federated learning described above, but rather anonymised metadata. It is this anonymised information, collected into a shared ‘reservoir’ of data, that allows the machines to learn and improve. It is impossible to recreate any of the source IP from the collective data.

However, security is still of the utmost importance when it comes to using AI, as with any data-powered technology. It is essential to ensure it is based on a secure platform with customer data integrity and confidentiality in place; an ISO 27001 certification is a great way to demonstrate that you have invested in managing risk.

Myth 3: AI is always changing, making its outcomes unpredictable and unsuitable for achieving repeatability

For highly regulated industries such as aerospace, repeatability is paramount. When creating parts for aircraft, for example, the 10,000th printed part needs to be exactly the same as the very first. For this reason, AI – and specifically federated learning – is often dismissed by regulated industries. Its benefits of incremental learning and improvement are seen as being at odds with stringent, life-critical safety requirements.

Industries like aerospace, where repeatability is required, can still benefit from AI-driven technologies. AI can be used for design iterations, helping to tweak and perfect aircraft parts in the early phases of development, for example. Once the team is happy with the parameters of the part, the system can then be ‘locked’ to ensure no further changes are made and no data updates are incorporated from the fleet. At this stage, the technology can be used as a verification tool to ensure there are no deviations in the printing process and that each part is exactly the same as the last.
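As an illustrative sketch of the 'lock then verify' workflow described above (not a description of any vendor's real API; the parameter names and values are invented), the frozen recipe can be fingerprinted once and every later job compared against that fingerprint:

import hashlib
import json

def fingerprint(params: dict) -> str:
    # Canonicalise the locked parameter set and hash it, so any later
    # change, however small, produces a different fingerprint.
    canonical = json.dumps(params, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Parameters frozen at the end of the design-iteration phase (hypothetical values).
locked = {"layer_height_mm": 0.125, "infill_pattern": "triangular", "overhang_angle_deg": 45}
locked_hash = fingerprint(locked)

def verify_job(job_params: dict) -> bool:
    # Verification step: flag any job that deviates from the locked recipe.
    return fingerprint(job_params) == locked_hash

print(verify_job(dict(locked)))                        # True: identical recipe
print(verify_job({**locked, "layer_height_mm": 0.2}))  # False: deviation detected

One way such a fingerprint could be used is to store it alongside the part's quality records, so that the 10,000th part can be shown to have been printed from exactly the same recipe as the first.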

In the longer term, the same technology will be able to ensure even greater repeatability by detecting and compensating for system behavioural changes like a lack of lubrication or wear and tear on machines.

Myth 4: AI will replace humans and take our jobs

This myth still resides very much in the world of science fiction. I say, let the machine take over when it comes to machine-related problems! Very few operators, engineers or industrial designers would complain if machines could ‘self-heal’, relieving them of mundane troubleshooting tasks and letting them get on with their day-to-day jobs.

Rather than making us lazy or redundant, using AI and machine learning is helping to fuel innovation and smarter working practices. In manufacturing or product design, instead of focusing on more process-driven ‘what’ and ‘how’ issues, it allows engineers to ask the ‘why’ and ‘what if’ questions and to explore the implications of different scenarios when it comes to increasing efficiency or creating new products – ultimately leading to greater business opportunities.

Myth 5: The cost of AI is hindering its adoption

There are two common responses I hear when I’m speaking to customers about our AI-powered machines: “I can’t believe how affordable it is!” and “It costs too much!” As with any developing technology, there are those that can see the value it can provide and those that see it as an expensive luxury. We are starting to see this change as AI advances beyond the early adopter phase. Those championing AI-driven solutions on the factory floor focus on the value it can bring – essentially allowing machines to solve machine-related problems, freeing up engineers and operators to invest their efforts into innovation, product development and other human-related endeavours.

It is important to remember that many of these myths exist because not all AI is created equal. To be an effective tool, AI requires access to large amounts of data – machines can’t ‘learn’ without a steady stream of reliable data. Before you invest in any AI-driven technology, make sure it has a reliable data source that can scale along with the machines it is powering.

Ted Plummer is principal product manager and resident AI expert at industrial 3D printing company, Markforged

Leer más

Publicado el 12/08/2021 en comment, industry 4.0, artificial intelligence, manufacturing, future of manufacturing

It’s time to turn digital transformation from buzzword into reality



The last year has affected our health, social interactions and businesses. From work calls to grocery shopping, every aspect of our lives has been forced online. While we have yet to move out of the pandemic, we are now at a point where we can look back at the dramatic speed of change in the last year and a half, and the impressive rate at which businesses adapted to the ‘new normal’. Indeed, many businesses are questioning how they could ever go back to their old ways of working.

The last year or so has highlighted the need to be agile and adaptable, and those that have embraced these principles are well positioned to deal with the unexpected in the future. There are a number of factors that go into making fundamental change a success, from having a culture that is open to new ways of working, to looking at the technological building blocks of a truly connected enterprise.

Implementing change is not just about cutting costs on the balance sheet or making minor optimisations to your business processes. Only by approaching transformation at a fundamental level can businesses make improvements that will last for years to come.

2020 was the year that digital transformation, forever a buzzword, finally began to mean something. Delays to transformation initiatives have limited their impact in the past, but the pandemic required businesses in every industry to quickly put measures in place in response to changing circumstances.

The result has shocked digital transformation into action, with digital businesses proven to be more resilient, more agile and more efficient. Forrester estimates that prior to 2020, only around 15 per cent of companies were ‘digitally savvy’, but the new realities have pushed through technological change at unprecedented speeds – McKinsey suggests digital adoption leapt five years ahead in just eight weeks, while Microsoft estimated two years of transformation occurred in two months. These investments gave businesses the tools they needed to behave in new ways and, with 97 per cent of IT leaders likely to continue 2020’s initiatives into 2021, this transformation is likely here to stay.

Changing consumer behaviour has, unsurprisingly, been a key factor fuelling much of this change. The digital investments of 2020 raised the bar for excellent digitally enabled services, which are now seen as the norm rather than a nice-to-have. Banking is a prime example of an industry pushed into transformation as a result of our changing lifestyles – the pandemic is accelerating the move towards a cashless society, and ModularBank estimates 90 per cent of UK customers now see technology as important when selecting a bank. While the growth of app-led challengers won’t displace traditional banks any time soon, their digital capabilities will steal away some customers. Every business should take heed from competitors who are making strides in digital – once customers get a taste for great experiences, they will seek them out.

The key thing is for companies to channel and focus their digital investments and automate as much of the management and maintenance of new technologies as possible. This will allow the experts to focus on using technology to innovate, which is how real transformation will happen. 

With so much talk around transformation, you could be forgiven for thinking of 2021 as the year quantum computing makes waves. But as ‘big bang’ as quantum could be, it’s not the technological innovation most companies need right now. Not only does this kind of big innovation take time to trickle down, but after a year of unexpected challenges, most companies need stability and resilience, rather than wholesale change.

What’s more, the ever-changing landscape of health, social and geo-political uncertainty will likely bring more unpredictable challenges over the next couple of years. It will therefore be crucial for businesses to focus on building resilience. A key starting point will be reducing the number of dependencies a business is reliant on.

Dependencies can be found in many forms. For example, the pandemic exposed dependencies in supply chains, as companies which were overly reliant on a single supplier for essential components faced huge risks – here, Apple was one of many companies which ran into shortages as a result of disruption to deliveries. Looking into the future, exactly how the UK’s departure from the EU will impact existing agreements on data regulation, or how international policies from the USA and China will impact global trade and technology standards, remains to be seen. The point is that any business overly dependent on data or technology from one country is taking a risk, but the agility gained from solid digital investments will help to resolve these issues.

Limiting the number of dependencies relies on a business having the means to identify them, which is easier to do if the whole infrastructure is connected. That said, it is still possible to run risk assessments on fragmented infrastructures with process discovery and architecture management tools. Essentially, having an accurate view of the facts enables businesses to make changes to reduce dependencies, allowing core operations to be shifted elsewhere if any part of a business is compromised. While such changes would previously have taken a long time to implement, the driving forces for technology have changed. With businesses still facing a period of uncertainty, there are compelling reasons for a new approach: one where complexity is removed from the process and companies stay focused on building their own agility and resilience.

The months ahead will undoubtedly throw more uncertainties and more crises at our businesses. What remains to be seen is whether firms will make the necessary investments in digital foundations now to weather this storm. Building resilience into your business may seem like a large cost when the skies are clear, but in the long run every business can benefit from being prepared.

Sanjay Brahmawar is CEO at Software AG.

Leer más

Publicado el 02/07/2021 en comment, digitalization, digital manufacturing, industry 4.0, coronavirus

View from India: Hyper connectivity for better outcomes



When we look at the manufacturing scenario, the emphasis has always been on mass production. Though the focus remains pretty much the same, robots and automation are add-ons to the manufacturing ecosystem. They have brought precision and speed to the product development cycle. The latest is Industry 4.0, which will digitally transform shop floors.

Industry 4.0 will help in predictions through real-time data. Through Industry 4.0, machine-learning (ML) tools can be integrated into production and machines will become intelligent and take informed decisions. Manual processes will be replaced by automation.

“The entire value chain of manufacturing becomes smart and automated through Industry 4.0. Right from conceptualisation-design-execution, every stage of the shop floor is a value add in terms of output,” said Syam Sunder, vice president of Engineering Convergence, Hexagon Manufacturing Intelligence Division, India, speaking at the CII webinar 'People and Process Focused Digital Transformation of the Shop Floor', in collaboration with Hexagon.

The digital transformation of shop floors will help organisations work towards a return on investment. Early adopters of smart manufacturing have integrated automation, real-time monitoring and dashboard updates. What seems most obvious is that the output of machines is faster. But that’s not enough.

What is missing is the element of people and processes, which need to go hand in hand with the digital transformation. Improved efficiency, enhanced quality, reduced cost and improved safety and sustainability can be seen as the value coming from people- and process-focused digital transformation.

A trained workforce is required to execute smart processes and make globally competitive products: it’s essential to tap unused talent. Skilled professionals and faster adoption of technology are required to scale up efficiencies and fine-tune the product line. Digital training can happen through augmented-reality (AR) and virtual-reality (VR) streaming videos. AR-VR videos can also be used to connect staff working in different units across the manufacturing facility, while manufacturers can improve field operations by regularly monitoring AR.

Nevertheless, the journey is not smooth. Pain points come in the form of siloed operations, which need to be broken down into a seamless whole to enable large-scale automation. Enterprise integration, artificial intelligence and edge computing will shape these operations. These technologies will replace paperwork and facilitate smart data governance; they will also connect people and processes digitally and allow solutions to be customised for clients. All of this can be achieved by combining physical and digital operations to generate insights through data analytics drawn from machines, suppliers and vendors. Data can also be used to manage the machines and identify bottlenecks, and storing that data in an edge computing system saves bandwidth. One or more databases can be created and tapped intelligently to meet customer needs. This leads to hyper connectivity.
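As a rough illustration of how keeping data in an edge computing system saves bandwidth (an assumption-laden sketch, not anything described in the webinar; the sample rate and field names are invented), a gateway can reduce high-rate machine readings to compact summaries locally and forward only those aggregates for dashboards and bottleneck analysis:

import statistics

def summarise_window(readings):
    # Hypothetical edge-side step: reduce a window of raw sensor samples
    # (e.g. spindle vibration sampled at 1 kHz) to a few numbers worth uploading.
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "stdev": statistics.pstdev(readings),
    }

# Toy usage: a minute of 1 kHz samples shrinks from 60,000 values to four fields,
# which is roughly what a plant dashboard needs for trend and bottleneck views.
raw_window = [0.02 * (i % 50) for i in range(60_000)]
print(summarise_window(raw_window))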

Hyper connectivity and Industry 4.0 will help in data optimisation and improve capabilities to meet new requirements. “We are getting into a new world, where data is being leveraged for several applications. Open protocols are becoming common, as data is being transmitted from one individual or a team to many colleagues through a common connection, which we identify as hyper connectivity,” explained Sunder.

Upon implementation, all these technologies lead to a more digital and connected scenario, where people and processes are integrated into the shop floor. This ensures ubiquitous visibility, robust traceability, compliance with processes and automated outputs. It also means data can be captured in a simple way, and enables collaboration across silos (if any).

The manufacturing world is moving from automation to a connected world. The journey began with mechanisation, which then evolved into industrialisation, mass production, automated processes, hyper connectivity and, finally, autonomy, perceived as the ultimate form of putting data to work.

Leer más

Publicado el 01/07/2021 en india, view from india, manufacturing, digital manufacturing, automation, industry 4.0
