DIGITAL TRANSFORMATION
Digital Transformation uses technologies such as cloud computing and artificial intelligence to redesign business processes across various sectors, rapidly changing everyday and industrial life while requiring new solutions and increased cybersecurity.
The concept of Digital Transformation refers to the processes of change at the level of companies and organizations through the specific use of digital technologies to redesign their own value creation processes. These technologies include quantum computing, cloud computing or edge computing, digital platforms, the Internet of Things, blockchain, artificial intelligence, virtual reality, and more.
This transformation affects numerous sectors of activity, from logistics to energy, agri-food, telecommunications, financial services, manufacturing, healthcare, and education, among others. The evolution and increasing adoption of these technologies are rapidly changing the daily lives of people and society, as well as aspects related to industry.
Digital Transformation not only enables the creation of new products and services but also demands new responses and technological solutions: smart communication networks, robust data infrastructure, and the highest possible level of cybersecurity across the economy.
TECHNOLOGIES APPLIED IN DIGITAL TRANSFORMATION
Electronic product development
Digital technology is transforming not only how companies manufacture products but also the products themselves. Today, it is clear that we have entered a new era of products—an era of smart and connected products. These are products equipped with embedded sensor capabilities, software, connectivity, and even artificial intelligence. This era is likely to offer extraordinary opportunities for businesses and their customers.
However, so far, only a few companies are fully leveraging these opportunities. Most continue to adhere to conventional methods of manufacturing, distributing, and using products, missing out on the vast potential that new digital technologies can unlock.
Manufacturers of products such as automobiles, devices, heavy and industrial machinery, and software will design solutions to deliver highly personalized experiences through digital and other services that can be tailored to customers’ needs. But this requires manufacturers to prepare for and embrace change, reinventing their products to enhance their value and grow their businesses.
This represents a significant step toward digitalization and a shift in how companies think about, design, manufacture, and sell products. Until now, the focus of digitalization in industries has primarily been on optimizing and increasing the efficiency of front- and back-office operations in areas such as marketing and sales, planning, procurement, finance, human resources, and IT. However, this focus is evolving.
The change is driven by technologies like the Internet of Things (IoT), sensors, edge computing, and cloud computing, which enable engineers to integrate computing power, connectivity, and software into almost any hardware product. This, in turn, supports the development of hyper-personalized product experiences and offerings such as “as-a-service” solutions—all built on digital technologies.
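As a toy illustration of the device-side pattern, the following standard-library Python sketch builds the kind of JSON telemetry message a smart, connected product might publish; the device identifier and sensor fields are hypothetical, not taken from any real product.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TelemetryReading:
    """One sensor snapshot from a hypothetical connected product."""
    device_id: str
    timestamp: float
    temperature_c: float
    vibration_mm_s: float
    firmware: str

def build_payload(reading: TelemetryReading) -> str:
    # Serialize to JSON, a common wire format for IoT telemetry;
    # a real device would publish this over MQTT, AMQP, or HTTPS.
    return json.dumps(asdict(reading))

if __name__ == "__main__":
    reading = TelemetryReading(
        device_id="press-042",   # hypothetical asset identifier
        timestamp=time.time(),
        temperature_c=71.3,
        vibration_mm_s=2.8,
        firmware="1.4.2",
    )
    print(build_payload(reading))
```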
5G Network Applications
The 5G wireless communication standard will be a game changer, not only in the consumer sector but also in the industrial sector. Industrial customers are already benefiting—or soon will—from fifth-generation mobile communications. 5G technology has the potential to transform the industrial landscape due to the following advantages (a quick arithmetic check of the data-rate figures follows the list):
- Low latency: Latency in a 5G network is less than 5 milliseconds (ms), compared to 15 to 80 ms for 4G (LTE).
- High data speeds: With 5G, data transfer rates can reach up to 10 Gbps (1.25 gigabytes per second). By contrast, average internet access in Germany currently allows data transfer rates of 50 to 100 Mbps (6.25 to 12.5 MB per second).
- Extreme reliability: The failure rates for 5G are very low, with reliability estimated at up to 99.999%.
- Energy efficiency: Batteries in 5G IoT devices are expected to last up to 10 years.
- Support for many devices: Up to one million devices can operate within a single square kilometer.
- High precision: Both stationary and moving objects can be located with an accuracy of less than 10 centimeters.
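The transfer-rate figures quoted above can be sanity-checked with simple arithmetic; the sketch below compares the time to move a 1 GB file at the cited 10 Gbps peak 5G rate versus a typical 100 Mbps connection.

```python
# Back-of-envelope check of the transfer-rate figures quoted above.
FIVE_G_BPS = 10e9   # 10 Gbps peak rate (as cited)
DSL_BPS = 100e6     # 100 Mbps typical access rate (as cited)

file_size_bits = 8 * 1e9  # a 1 GB file, in bits

print(f"5G:  {file_size_bits / FIVE_G_BPS:.1f} s")  # 0.8 s
print(f"DSL: {file_size_bits / DSL_BPS:.0f} s")     # 80 s
```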
The new 5G mobile communication standard forms the foundation for digitalization and the full interconnection of all areas of life, particularly in industry. While the benefits of 5G for consumers are often discussed, the applications that stand to gain the most are in IoT and Industry 5.0. Specifically, areas such as production automation, optimized control of robots and machines, smart maintenance, and the overall interconnection and management of production, facilities, warehouses, and logistics will see significant advantages.
Moreover, 5G will make autonomous driving a reality. A prerequisite for this is real-time direct communication between vehicles and the high availability of mobile data connections. With 5G, not only are vehicles themselves intelligently interconnected, but so are all road users (V2X/Vehicle-to-Everything). 5G mobile connections work reliably in moving vehicles, even at speeds of up to 500 km/h.
In urban areas, the high communication density enabled by 5G is particularly remarkable. Up to one million devices can send and receive data simultaneously within a square kilometer. This benefits densely populated urban areas as well as large events, where tens of thousands of users need to be served simultaneously in a limited space.
Immersive Solutions and Generative AI
Chatbots have become an essential tool in the digital era, enhancing communication and efficiency across various sectors. These artificial intelligence (AI) systems automate user interactions through text-based chat interfaces, simplifying access to information and services. In this context, GPT-4-class models are gaining prominence: for the first time, a chatbot can hold a fluid conversation, understand users, and generate coherent text. Adding immersive environments to these capabilities takes the experience to the next level.
Chatbots rely on a combination of AI technologies and natural language processing (NLP) algorithms that enable smooth communication and task execution based on user requests. Their utility becomes evident when integrated with applications and services such as websites, instant messaging apps, or customer support systems.
These chatbots can also include Speech-to-Text (STT) modules, which convert audio input from users into text, and Text-to-Speech (TTS) modules, which synthesize voice responses. In this setup, chatbots generate text-based replies, which are then converted into audio, providing spoken responses to users.
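A minimal sketch of that three-stage voice loop is shown below; the three functions are placeholders standing in for real STT, language-model, and TTS services, not any particular vendor's API.

```python
def speech_to_text(audio: bytes) -> str:
    """Placeholder STT stage: a real system would call a model such as Whisper."""
    return "what are your opening hours?"

def generate_reply(prompt: str) -> str:
    """Placeholder LLM stage: a real system would call a GPT-class model."""
    return "We are open Monday to Friday, 9:00 to 18:00."

def text_to_speech(text: str) -> bytes:
    """Placeholder TTS stage: a real system would synthesize audio here."""
    return text.encode("utf-8")  # stand-in for audio bytes

def handle_user_turn(audio_in: bytes) -> bytes:
    # The full voice loop: audio in -> transcript -> reply text -> audio out.
    transcript = speech_to_text(audio_in)
    reply = generate_reply(transcript)
    return text_to_speech(reply)

print(handle_user_turn(b"..."))
```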
This technology is evident in virtual assistants like Alexa or Siri. But advancements have pushed boundaries further. Recently, many services have emerged that create hyper-realistic digital avatars or personas with lipsync modules, synchronizing the avatar’s lip movements with generated audio. By giving chatbots a face, the experience becomes even more natural and human-like.
By integrating these digital avatars or characters into platforms like Unity or Unreal Engine, developers can create applications that incorporate 3D models, spatial sound, and advanced interaction systems. These technologies enable virtual beings to interact with users in more realistic and natural ways. Additionally, these platforms ensure compatibility across devices such as mobile phones, PCs, and AR/VR systems, including Meta Quest, HTC Vive, ARCore/ARKit-compatible mobile devices, and mixed-reality devices like HoloLens.
Although these technologies were already available, the arrival of ChatGPT and GPT-4, OpenAI’s advanced language model, has significantly transformed the capabilities and applications of chatbots. The use of GPT-4 in creating virtual beings enables the generation of contextual dialogues and responses, resulting in more human-like and realistic interactions. Moreover, GPT-4’s ability to autonomously learn and adapt to specific situations allows these virtual beings to enhance their performance over time and tailor interactions to individual users.
With these technologies, we can develop various virtual agents for a wide range of roles. Some are familiar roles that will see substantial improvements with GPT-4, while others are entirely new. For instance, virtual assistants can now be deployed in industries such as manufacturing, healthcare, tourism, hospitality, and more, enabling unprecedented levels of personalized and efficient service.
Artificial Intelligence
Artificial Intelligence (AI) has become a common term in our lives. It is a technology that aims to build systems capable of performing functions that were once exclusive to humans, in contrast with earlier systems that relied solely on predefined, programmed decisions.
On one side, there are those who develop algorithms to address specific problems, and on the other, those who use third-party algorithms and must thoroughly understand their development and functionality to determine their appropriate use: the analysts. Data analysts utilize algorithms designed by others to classify, optimize, predict, detect patterns, select attributes, and more. Meanwhile, algorithm developers focus on researching new methods to achieve better adjustments, higher accuracy rates, and improved outcomes.
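As a small illustration of the analyst's side of this division of labor, the sketch below applies a ready-made scikit-learn classifier to toy machine data; it assumes scikit-learn is installed, and the data and feature choices are invented for the example.

```python
# Requires scikit-learn (pip install scikit-learn).
from sklearn.tree import DecisionTreeClassifier

# Toy data: [operating_hours, avg_temperature_c] -> 1 = failed, 0 = healthy.
X = [[100, 60], [150, 65], [800, 90], [900, 95], [200, 70], [850, 88]]
y = [0, 0, 1, 1, 0, 1]

# The analyst reuses an algorithm designed by others, focusing on
# framing the problem and interpreting the result.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[870, 92], [120, 62]]))  # expected: [1 0]
```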
Organizations such as OpenAI, founded as a non-profit research laboratory, state as their mission to ensure that AI benefits all of humanity. Current examples of AI tools making headlines and spanning various sectors include:
- DALL·E 2: An AI capable of creating realistic images and artwork from natural language descriptions.
- GPT (Generative Pre-trained Transformer): An AI that generates human-like content using natural language.
- Murf: An AI that converts text into audio for voice generation.
- AIVA: An AI that composes soundtrack music.
- Midjourney: An AI for creating images from textual descriptions.
- Whisper: An AI for audio-to-text transcription, supporting multiple languages and including translation to English.
- Stable Diffusion: An AI that generates photorealistic images from text inputs.
- NeRF (Neural Radiance Fields): A technique for reconstructing 3D scenes and rendering novel views from 2D photos.
- D-ID: An AI for creating talking avatars.
The use of these tools offers multiple advantages, including process automation, greater accuracy, reduced human errors, and cost and time optimization. However, it is crucial to remain critical and understand the challenges encountered during their implementation and development.
Predictive maintenance using AI techniques has highlighted the critical role of time series data in identifying the best solutions to address damage or threats. Prediction, classification, diagnosis, and the activation of corrective measures to prevent major damage are central to this process. Until a few years ago, plain recurrent neural networks (RNNs) were commonly used for these problems; Long Short-Term Memory (LSTM) networks, a gated RNN variant, have since gained prominence thanks to their superior ability to learn long-term dependencies.
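A minimal sketch of such a model is shown below, assuming PyTorch; the sensor count, window length, and layer sizes are illustrative values, not parameters from any project described here.

```python
# Requires PyTorch (pip install torch). A minimal sketch, not a tuned model.
import torch
import torch.nn as nn

class FailurePredictor(nn.Module):
    """LSTM that maps a window of sensor readings to a failure probability."""
    def __init__(self, n_features: int = 4, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)                 # h_n: (1, batch, hidden)
        return torch.sigmoid(self.head(h_n[-1]))   # (batch, 1) probabilities

model = FailurePredictor()
window = torch.randn(8, 50, 4)  # 8 machines, 50 time steps, 4 sensors
print(model(window).shape)      # torch.Size([8, 1])
```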
Digital Twins
A digital twin is a virtual replica of a physical system throughout its entire lifecycle, from initial conception and design through implementation and operation to eventual evolution. Digital twins use data collected from equipment and sensors, both in real time and historically, not only to monitor the physical system they represent but also to simulate, analyze, and efficiently predict its behavior and performance. Advances in task automation and the use of digital twins for decision-making are among the most promising technological tools in industry today. In this context, it is essential to develop the knowledge needed to implement digital twins that represent real systems, enabling simulation of their behavior and informed decision-making.
Some of the advantages offered by digital twins include:
- Improved performance: They provide real-time insights to optimize the performance of equipment and facilities, addressing issues as they arise and minimizing downtime.
- Predictive capabilities: Digital twins offer a comprehensive view of facilities, identifying emerging issues and enabling preventive action before outright failures occur.
- Faster production timelines: By creating digital replicas, scenarios can be tested to anticipate and resolve problems before actual production, improving production processes.
- Remote monitoring: Their virtual nature facilitates online supervision and remote control of facilities, reducing the need for physical presence in potentially hazardous environments.
- Traceability of processes, resources, and material flow: Full control over the plant’s operations.
Digital twin technology is rapidly advancing, leveraging other enabling technologies such as artificial intelligence, the Internet of Things (IoT), and machine learning, which accelerate the potential of digital twins to transform the industry.
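The core loop of a digital twin, mirroring live state and simulating forward to support decisions, can be sketched in a few lines; the pump, its linear wear model, and the service threshold below are invented purely for illustration.

```python
class PumpTwin:
    """Toy digital twin of a pump: mirrors live state and extrapolates wear."""
    WEAR_LIMIT = 1.0  # hypothetical wear level at which service is due

    def __init__(self):
        self.wear = 0.0
        self.wear_rate = 0.0  # estimated from the sensor stream

    def ingest(self, hours_run: float, measured_wear: float) -> None:
        # Update the mirrored state from (real-time or historical) telemetry.
        if hours_run > 0:
            self.wear_rate = measured_wear / hours_run
        self.wear = measured_wear

    def hours_to_service(self) -> float:
        # Simulate forward: how long until the wear limit is reached?
        if self.wear_rate <= 0:
            return float("inf")
        return (self.WEAR_LIMIT - self.wear) / self.wear_rate

twin = PumpTwin()
twin.ingest(hours_run=1200, measured_wear=0.6)
print(f"Service due in ~{twin.hours_to_service():.0f} h")  # ~800 h
```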
IoT Applications, Architecture, and Data Management
Generative AI and Cybersecurity
Generative AI tools use our conversations to improve and train their models. Often, users are unaware of this, and our privacy can be compromised if the tools are not used responsibly. The tool may learn from our personal data and use it to generate responses for other users, leading to the potential leakage of private information.
Therefore, it is important to deactivate model-training improvements, which usually means finding the corresponding option in the settings of the tools we use (ChatGPT, for example, exposes this in its data-control settings).
Even if we deactivate model training, it is crucial to limit the information we provide to generative AI tools. We should provide only the minimum necessary information to make our inquiry, avoiding revealing additional or unnecessary details.
As always, special attention must be given to confidential or sensitive information, such as personal data. We should avoid entering this type of data into the tool to prevent potential future leaks.
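One practical safeguard is to scrub obvious personal data from a prompt before it leaves the organization. Below is a minimal sketch, with deliberately simplistic regular expressions; a real deployment would need far broader PII coverage.

```python
import re

# Illustrative patterns only; real deployments need broader PII coverage.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace obvious personal data with placeholders before sending."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or +34 600 123 456."))
# Contact Jane at [email removed] or [phone removed].
```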
It is also essential that employees are aware of other dangers associated with generative AI tools. For instance, malicious browser extensions might appear as a way to facilitate the use of ChatGPT, or fake websites or apps may impersonate ChatGPT. Their aim is to infect our devices or collect our passwords.
Finally, it is important to recognize the misuse of generative AI. The creation of false content, identity impersonation (e.g., with deepfakes), or the manipulation of opinions are some of the abuses of this technology.
In conclusion, generative AI tools can have a significant impact on the productivity of individuals and organizations. However, it is essential to be aware of the risks associated with irresponsible use of the technology and to understand the measures we can take to minimize these risks.
Quantum computing
At the frontier of the computing world lies an emerging paradigm known as quantum computing. Quantum Theory has enabled some of the greatest technological advancements of the past century, from the creation of lasers and medical Magnetic Resonance Imaging (MRI) to the entire semiconductor physics that underpins modern electronics. Additionally, Quantum Theory is now enabling the development of quantum computing.
Quantum computing represents a new way of performing calculations using the rules of quantum physics. Instead of the traditional bits of classical computers, which are either 0 or 1, it uses quantum bits, or qubits, which can be 0, 1, or a superposition of both at the same time. For certain classes of problems, this allows solutions to be found far faster than current computers can manage. Loosely speaking, a quantum computer explores many candidate solutions in superposition at once, although carefully designed algorithms are needed to extract a useful answer.
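The superposition idea can be made concrete with a short linear-algebra example: applying a Hadamard gate to the |0⟩ state yields a qubit that measures as 0 or 1 with equal probability. This is a standard textbook calculation, simulated here with NumPy.

```python
import numpy as np

# A qubit state is a unit vector a|0> + b|1> with |a|^2 + |b|^2 = 1.
ket0 = np.array([1.0, 0.0])

# The Hadamard gate puts |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = H @ ket0                    # [0.7071, 0.7071]

probabilities = np.abs(state) ** 2  # Born rule: measurement statistics
print(probabilities)                # [0.5 0.5] -> 0 and 1 equally likely
```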
This type of computing has the potential to revolutionize many sectors. It helps optimize operations that classical computing cannot handle. In the chemical industry, for example, it can speed up the design of new materials and medicines. In logistics, it optimizes delivery routes more efficiently. In energy, it improves the distribution and design of materials for solar energy. This is just the beginning; quantum computing promises to solve complex problems across various fields. Its advancement is creating a new paradigm in information processing, with immense potential to transform industries.
Immersive and collaborative robotics
Cutting-edge technologies in smart manufacturing have presented promising opportunities for utilizing collaborative teleoperation between humans and robots in customized manufacturing tasks. To effectively harness the creative capabilities of humans while benefiting from the efficiency and stability of robots, it is crucial to provide an intuitive teleoperation interface. However, current teleoperation systems still face limitations in terms of intuitive operability. An immersive virtual reality (VR)-based teleoperation system offers operators a more intuitive platform for controlling robots, thereby facilitating customized manufacturing processes.
Significant advancements in VR technology have led to substantial improvements in the performance and usability of virtual reality equipment and its accessories. By leveraging these features, VR devices can be integrated into teleoperation systems to reduce operators’ learning costs and provide a more immersive teleoperation experience.
Collaborative robots (cobots) are currently highly valuable allies in the industry. For the partnership between humans and these robots to function effectively, these technological entities must be programmed to work efficiently. Typically, cobots are equipped with various types of sensors to facilitate their work. In this way, to ensure the safety of human workers, they are equipped with tools that allow them to detect when a person is nearby, ensuring they do not harm the worker.
Moreover, the software that powers these cobots enables machine learning so that they can identify and understand the work environment in which they operate. It is in the design of this software for programming cobots where developers must ensure that the interactions between these robots and the human environment are neither problematic nor risky for the human worker. Currently, cobots can be found inside cages, like traditional robots, but there are also cobots that can move from one station to another. However, a common feature among them all is the safety of their interaction with humans.
Despite these technological advancements, which have enabled both large and medium-sized companies, and even some small ones, to incorporate these cobots into their production lines, much work remains to be done. This work will focus on refining these new tools to improve their effectiveness and the relationship established between these robots and people.
Neuromorphic engineering
Neuromorphic engineering and the synthesis of software algorithms into hardware are propelling artificial intelligence to a new level of efficiency and speed. In some cases, this approach achieves processing speeds over a thousand times faster, providing companies with a significant competitive edge in applications related to high-risk systems and data processing where latency is critical.
The scientific community is currently revisiting past concepts and advocating for solutions in edge computing, which involves direct processing on the machine, sensor, or device itself. This represents an evolution of cyber-physical systems, where computation is designed to occur within the system rather than on a separate platform. Additionally, FPGA systems (programmable devices with logic blocks), which experienced a peak in popularity before being overshadowed by microprocessors, ARM systems (electronics with computational capabilities and reduced instruction sets), or DSPs (digital signal processors), are regaining prominence.
New models are being developed based on neural networks, such as SNNs (spiking neural networks), which constitute the third generation of artificial neural networks. These networks are inspired by the actual behavior of the brain and enable continuous learning. Beyond classification and behavioral prediction, SNNs introduce discretization in inputs and plasticity in parameter adjustments.
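The behavior that distinguishes SNNs, integrating input over time and firing discrete spikes, can be illustrated with a single leaky integrate-and-fire neuron, the basic unit behind many spiking models; the threshold and leak constants below are arbitrary example values.

```python
# A leaky integrate-and-fire neuron, the basic unit behind many SNN models.
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Integrate input current over discrete time steps; emit a spike (1)
    and reset when the membrane potential crosses the threshold."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                     # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # [0, 0, 1, 0, 0, 1]
```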
The integration of hardware and software systems into blocks synthesized into ASICs or directly within FPGAs will facilitate the development of true neuromorphic systems. These systems, based on analog circuits, are designed to emulate the neurobiological structures of the nervous system.
Blockchain
In today’s digital age, technology has radically transformed the way we interact with the world around us. One of the most revolutionary advancements to emerge in the last decade is blockchain. This concept, initially linked to cryptocurrencies like Bitcoin, has evolved into a key catalyst for innovation across various sectors.
Blockchain is a decentralized and distributed ledger that allows the creation of an interconnected chain of blocks, each storing information securely and transparently. Unlike traditional databases, where information is stored in a single location, blockchain decentralizes the data, making it resistant to manipulation and cyberattacks.
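The hash-linked structure described above can be sketched in a few lines of Python: each block stores the hash of its predecessor, so altering any block invalidates every later link. This is a minimal illustration, not a production ledger (no consensus, no signatures).

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form; any tampering changes the digest.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block references the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

chain: list = []
add_block(chain, "genesis")
add_block(chain, "shipment 42 dispatched")

# Verification: every block must reference the hash of its predecessor.
ok = all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
         for i in range(1, len(chain)))
print("chain valid:", ok)  # chain valid: True
```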
One of blockchain’s greatest contributions is its ability to redefine trust in transactions. Instead of relying on intermediaries like banks or payment platforms, blockchain enables direct transactions between the parties involved, eliminating the need to trust third parties. This not only streamlines processes but also reduces associated costs.
In the business world, blockchain has emerged as a key driver of innovation. It facilitates the creation of smart contracts, self-executing digital agreements that activate when predefined conditions are met. This reduces bureaucracy, optimizes operational efficiency, and provides companies with greater agility in their business processes.
Furthermore, blockchain offers greater transparency and traceability in supply chains. Companies can track each step of the production process, from raw materials to the final product. This not only ensures quality but also meets the growing consumer demand to know the origins of the products they purchase.
Despite its benefits, blockchain faces challenges such as scalability and widespread adoption. However, as the technology evolves, businesses and governments are exploring ways to integrate it into their daily operations. Blockchain also facilitates regulatory compliance by allowing businesses to maintain an immutable, verifiable record of their transactions, which is particularly useful in highly regulated industries that must meet strict transparency and reporting requirements.
The blockchain revolution is far from over. The technology is expected to keep transforming the way we interact, share information, and conduct transactions. As more industries adopt blockchain, we can anticipate an era of greater efficiency, transparency, and trust driven by this decentralized technology.
OUR PROJECTS
CEL.IA – Advanced AI Solutions for Human-Machine Interfaces
CEL.IA is a strategic research project carried out in cooperation among several technology centers, which aims to join forces to develop a "toolkit", a complete portfolio of solutions based on virtual and augmented reality, computer vision, and natural language processing, to facilitate the effective incorporation of Artificial Intelligence into human-machine interfaces.
Duration: 2021 - 2023
ALMATIC – Definition and Development of Algorithms for a Data Analysis Platform for Maintenance, Predictive Quality, and Health Problems
Research into a data analysis platform on which a Smart Data system will be deployed to process general-purpose industrial information, and on which algorithms based on advanced AI techniques will be studied to identify potential problems in the sectors of interest.
Duration: September 2021 - September 2022
CIBERTRAZ – Cyber-Physical Systems for Maintenance Traceability
Cyber-physical systems for maintenance traceability are transforming how data integrity is maintained in service environments. The CIBERTRAZ project combines Blockchain, IoT, and Big Data to enhance data trust and optimize maintenance processes.
Ready Twin – Innovative Digital Twins Solutions
Research on different technologies, techniques, tools, methodologies and knowledge aimed at developing innovative technological solutions for the generation and exploitation of Digital Twins.
Duration: July 2019 - June 2023
NeuroCPS4 Maintenance – Neuromorphic Anomaly Detector
NeuroCPS4Maintenance is a project that aims to develop and demonstrate a neuromorphic edge anomaly detector that is robust against concept drift, raises fault alerts early, and provides fast, real-time responses for predictive maintenance applications in high-demand industrial scenarios (an industrial press). The anomaly detector will be based on deep learning algorithms (LSTM) and implemented on systems-on-chip (SoCs).
Duration: March 2021 - 2022
CYBERSEC – IT Security in Emerging Technologies
The CYBERSEC project researches various technologies, techniques, tools, methodologies, and knowledge aimed at developing technological solutions to secure highly critical connected environments, such as Industry 4.0, Smart Cities, or Critical Infrastructures, against cyber-attacks.
Duration: 2020 - 2024
Smart Contract Data – Optimizing Construction Contracts
The Smart Contract Data system revolutionizes contract management in the construction sector by leveraging Big Data and Artificial Intelligence. This innovative project aims to enhance efficiency and competitiveness through personalized decision-support tools.
TELEBOT VR – Remote Robot Control via Virtual Reality
TELEBOT VR aims to reduce the sensory distance an operator experiences when controlling a robot remotely, using simulation and virtual reality interfaces, and to improve visibility of and control over robotic systems and joints.
Duration: September 2020 - December 2022
Neuromorfico EG – Neuromorphic systems for processing in EdGe Computing
The project experiments on new neuromorphic systems for processing in Edge Computing, through concrete technological developments for general application in industrial environments, capital goods, and in cybersecurity, in Intrusion Detection Systems (IDS).
Duration: 2021 - 2023
Fandango – Digital Twin in Automotive Manufacturing
Fandango aims to improve operational efficiency in the automotive components sector by acting on the visibility of information in the supply chain, maximizing product quality, and optimizing maintenance processes. It will use digital twins to detect emerging problems earlier and predict outcomes more accurately than pure simulation models.
Duration: 2018 - 2022