DIGITAL TRANSFORMATION

Digital Transformation uses technologies such as cloud computing and artificial intelligence to redesign business processes across various sectors, rapidly changing everyday and industrial life while requiring new solutions and increased cybersecurity.

The concept of Digital Transformation refers to processes of change at the level of companies and organizations through the targeted use of digital technologies to redesign their own value creation processes. These technologies include quantum computing, cloud and edge computing, digital platforms, the Internet of Things, blockchain, artificial intelligence, virtual reality, and more.

This transformation affects numerous sectors of activity, from logistics to energy, agri-food, telecommunications, financial services, manufacturing, healthcare, and education, among others. The evolution and increasing adoption of these technologies are rapidly changing the daily lives of people and society, as well as aspects related to industry.

Digital Transformation not only enables the creation of new products and services but also demands new responses and technological solutions: smart communication networks, robust data infrastructure, and strong cybersecurity across the economy.

TECHNOLOGIES APPLIED IN DIGITAL TRANSFORMATION

Electronic Product Development

Digital technology is transforming not only how companies manufacture products but also the products themselves. Today, it is clear that we have entered a new era of products—an era of smart and connected products. These are products equipped with embedded sensor capabilities, software, connectivity, and even artificial intelligence. This era is likely to offer extraordinary opportunities for businesses and their customers.

However, so far, only a few companies are fully leveraging these opportunities. Most continue to adhere to conventional methods of manufacturing, distributing, and using products, missing out on the vast potential that new digital technologies can unlock.

Manufacturers of products such as automobiles, devices, heavy and industrial machinery, and software will design solutions to deliver highly personalized experiences through digital and other services that can be tailored to customers’ needs. But this requires manufacturers to prepare for and embrace change, reinventing their products to enhance their value and grow their businesses.

This represents a significant step toward digitalization and a shift in how companies think about, design, manufacture, and sell products. Until now, the focus of digitalization in industries has primarily been on optimizing and increasing the efficiency of front- and back-office operations in areas such as marketing and sales, planning, procurement, finance, human resources, and IT. However, this focus is evolving.

The change is driven by technologies like the Internet of Things (IoT), sensors, edge computing, and cloud computing, which enable engineers to integrate computing power, connectivity, and software into almost any hardware product. This, in turn, supports the development of hyper-personalized product experiences and offerings such as “as-a-service” solutions—all built on digital technologies.

5G Network Applications

The 5G wireless communication standard will be a game changer, not only in the consumer sector but also in the industrial sector. Industrial customers are already benefiting—or soon will—from fifth-generation mobile communications. 5G technology has the potential to transform the industrial landscape due to the following advantages:

  • Low latency: Latency in a 5G network is less than 5 milliseconds (ms), compared to 15 to 80 ms for 4G (LTE).
  • High data speeds: With 5G, data transfer rates can reach up to 10 Gbps (1.25 gigabytes per second). By contrast, average internet access in Germany currently offers 50 to 100 Mbps (6.25 to 12.5 MB per second); a worked comparison follows this list.
  • Extreme reliability: The failure rates for 5G are very low, with reliability estimated at up to 99.999%.
  • Energy efficiency: Batteries in 5G IoT devices are expected to last up to 10 years.
  • Support for many devices: Up to one million devices can operate within a single square kilometer.
  • High precision: Both stationary and moving objects can be located with an accuracy of less than 10 centimeters.
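
To make these figures concrete, the sketch below converts the quoted rates into transfer times for a single file. It is a back-of-the-envelope Python calculation; the 2 GB file size and the 100 Mbps 4G baseline are illustrative assumptions, and real-world throughput varies widely.

```python
# Rough transfer-time comparison using the rates quoted in the list above.
def transfer_seconds(file_gb: float, rate_mbps: float) -> float:
    """Time to move a file of file_gb (decimal) gigabytes at rate_mbps megabits/s."""
    bits = file_gb * 8 * 1000**3        # gigabytes -> bits
    return bits / (rate_mbps * 1e6)     # megabits/s -> bits/s

for label, rate_mbps in [("4G (100 Mbps)", 100), ("5G peak (10 Gbps)", 10_000)]:
    print(f"{label}: a 2 GB file takes {transfer_seconds(2, rate_mbps):.1f} s")
# 4G (100 Mbps): a 2 GB file takes 160.0 s
# 5G peak (10 Gbps): a 2 GB file takes 1.6 s
```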

The new 5G mobile communication standard forms the foundation for digitalization and the full interconnection of all areas of life, particularly in industry. While the benefits of 5G for consumers are often discussed, the applications that stand to gain the most are in IoT and Industry 5.0. Specifically, areas such as production automation, optimized control of robots and machines, smart maintenance, and the overall interconnection and management of production, facilities, warehouses, and logistics will see significant advantages.

Moreover, 5G will make autonomous driving a reality. A prerequisite for this is real-time direct communication between vehicles and the high availability of mobile data connections. With 5G, not only are vehicles themselves intelligently interconnected, but so are all road users (V2X/Vehicle-to-Everything). 5G mobile connections work reliably in moving vehicles, even at speeds of up to 500 km/h.

In urban areas, the high communication density enabled by 5G is particularly remarkable. Up to one million devices can send and receive data simultaneously within a square kilometer. This benefits densely populated urban areas as well as large events, where tens of thousands of users need to be served simultaneously in a limited space.

Immersive Solutions and Generative AI

Chatbots have become an essential tool in the digital era, enhancing communication and efficiency across various sectors. These artificial intelligence (AI) systems automate user interactions through text-based chat interfaces, simplifying access to information and services. In this context, GPT-4 models are gaining prominence, marking the first time that chatbots can hold a natural conversation, understand users, and generate coherent text. Adding immersive environments to these capabilities takes the experience to the next level.

Chatbots rely on a combination of AI technologies and natural language processing (NLP) algorithms that enable smooth communication and task execution based on user requests. Their utility becomes evident when integrated with applications and services such as websites, instant messaging apps, or customer support systems.

These chatbots can also include Speech-to-Text (STT) modules, which convert audio input from users into text, and Text-to-Speech (TTS) modules, which synthesize voice responses. In this setup, chatbots generate text-based replies, which are then converted into audio, providing spoken responses to users.
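
As a sketch of how these pieces fit together, the following Python loop chains STT, a language model, and TTS while keeping the conversation history for context. The OpenAI call follows the openai-python v1 client; speech_to_text and text_to_speech are hypothetical placeholders for whatever STT/TTS engines are deployed.

```python
# Voice chatbot skeleton: audio in -> text -> LLM reply -> audio out.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def speech_to_text(audio: bytes) -> str:
    raise NotImplementedError  # plug in an STT engine (e.g. a Whisper service)

def text_to_speech(text: str) -> bytes:
    raise NotImplementedError  # plug in a TTS voice

def handle_turn(audio_in: bytes) -> bytes:
    user_text = speech_to_text(audio_in)
    history.append({"role": "user", "content": user_text})
    resp = client.chat.completions.create(model="gpt-4", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})  # preserve context
    return text_to_speech(reply)
```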

This technology is evident in virtual assistants like Alexa or Siri. But advancements have pushed boundaries further. Recently, many services have emerged that create hyper-realistic digital avatars or personas with lip-sync modules, synchronizing the avatar’s lip movements with generated audio. By giving chatbots a face, the experience becomes even more natural and human-like.

By integrating these digital avatars or characters into platforms like Unity or Unreal Engine, developers can create applications that incorporate 3D models, spatial sound, and advanced interaction systems. These technologies enable virtual beings to interact with users in more realistic and natural ways. Additionally, these platforms ensure compatibility across devices such as mobile phones, PCs, and AR/VR systems, including Meta Quest, HTC Vive, ARCore/ARKit-compatible mobile devices, and mixed-reality devices like HoloLens.

Although these technologies were already available, the arrival of ChatGPT and GPT-4, OpenAI’s advanced language model, has significantly transformed the capabilities and applications of chatbots. The use of GPT-4 in creating virtual beings enables the generation of contextual dialogues and responses, resulting in more human-like and realistic interactions. Moreover, GPT-4’s ability to autonomously learn and adapt to specific situations allows these virtual beings to enhance their performance over time and tailor interactions to individual users.

With these technologies, we can develop various virtual agents for a wide range of roles. Some are familiar roles that will see substantial improvements with GPT-4, while others are entirely new. For instance, virtual assistants can now be deployed in industries such as manufacturing, healthcare, tourism, hospitality, and more, enabling unprecedented levels of personalized and efficient service.

Artificial Intelligence

Artificial Intelligence (AI) has become a common term in our lives. Unlike earlier systems, which relied solely on predefined, programmed decisions, AI aims to provide algorithms capable of performing functions that were once exclusive to humans.

On one side, there are those who develop algorithms to address specific problems, and on the other, those who use third-party algorithms and must thoroughly understand their development and functionality to determine their appropriate use: the analysts. Data analysts utilize algorithms designed by others to classify, optimize, predict, detect patterns, select attributes, and more. Meanwhile, algorithm developers focus on researching new methods to achieve better adjustments, higher accuracy rates, and improved outcomes.
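
As a minimal illustration of that analyst workflow, the sketch below applies a third-party algorithm from scikit-learn to a bundled toy dataset in order to classify, evaluate accuracy, and inspect attribute importance; the dataset and model choice are arbitrary examples.

```python
# Analyst workflow: reuse an off-the-shelf classifier rather than invent one.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("attribute importances:", model.feature_importances_)  # attribute selection
```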

Organizations like OpenAI, founded as a private non-profit, create open AI projects for everyone, with a mission to ensure that AI benefits all of humanity. Current examples of AI tools making headlines and spanning various sectors include:

  • DALL·E 2: An AI capable of creating realistic images and artwork from natural language descriptions.
  • GPT (Generative Pre-trained Transformer): An AI that generates human-like content using natural language.
  • Murf: An AI that converts text into audio for voice generation.
  • AIVA: An AI that composes soundtrack music.
  • Midjourney: An AI for creating images from textual descriptions.
  • Whisper: An AI for audio-to-text transcription, supporting multiple languages and including translation to English.
  • Stable Diffusion: An AI that generates photorealistic images from text inputs.
  • NeRF: An AI for rendering 3D images from 2D photos.
  • D-ID: An AI for creating talking avatars.

The use of these tools offers multiple advantages, including process automation, greater accuracy, reduced human errors, and cost and time optimization. However, it is crucial to remain critical and understand the challenges encountered during their implementation and development.

Predictive maintenance using AI techniques has highlighted the critical role of time series data in identifying the best solutions to address damage or threats. Prediction, classification, diagnosis, and the activation of corrective measures to prevent significant damage are central to this process. Until a few years ago, plain recurrent neural networks (RNNs) were commonly used to solve these problems. However, Long Short-Term Memory (LSTM) networks, a gated variant of the RNN, have gained prominence due to their superior ability to learn long-term dependencies.
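
A minimal PyTorch sketch of such an LSTM-based predictor is shown below; the window shape, sensor count, and single failure-probability output are illustrative assumptions, not a reference design.

```python
# LSTM over windows of sensor readings -> probability of imminent failure.
import torch
import torch.nn as nn

class FailurePredictor(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features) window of sensor readings
        _, (h_n, _) = self.lstm(x)                 # final hidden state summarizes the window
        return torch.sigmoid(self.head(h_n[-1]))   # failure probability per sample

model = FailurePredictor(n_features=8)
window = torch.randn(32, 100, 8)  # 32 machines, 100 time steps, 8 sensors
print(model(window).shape)        # torch.Size([32, 1])
```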

Digital Twins

A Digital Twin is a virtual replica of a physical system that accompanies it throughout its entire lifecycle, from initial conception and design through implementation and operation to eventual evolution. Digital twins utilize data collected from various equipment and sensors, both in real time and historically, with the goal not only of monitoring the physical system they represent but also of simulating, analyzing, and efficiently predicting its behavior and performance. Advances in task automation and the use of digital twins for decision-making are among the most promising technological tools in industry today. In this context and others, it is essential to develop the knowledge needed to implement digital twins that represent real systems, enabling simulation of their behavior and facilitating informed decision-making.

Some of the advantages offered by digital twins include:

  • Improved performance: They provide real-time insights to optimize the performance of equipment and facilities, addressing issues as they arise and minimizing downtime.
  • Predictive capabilities: Digital twins offer a comprehensive view of facilities, identifying issues or failures as they occur and enabling preventive actions before complete failures happen.
  • Faster production timelines: By creating digital replicas, scenarios can be tested to anticipate and resolve problems before actual production, improving production processes.
  • Remote monitoring: Their virtual nature facilitates online supervision and remote control of facilities, reducing the need for physical presence in potentially hazardous environments.
  • Traceability of processes, resources, and material flow: Full control over the plant’s operations.

Digital twin technology is rapidly advancing, leveraging other enabling technologies such as artificial intelligence, the Internet of Things (IoT), and machine learning, which accelerate the potential of digital twins to transform the industry.
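
To show the core idea in miniature, here is a toy Python twin that mirrors live sensor readings and flags drift from its simulated expectation. The pump, the fixed expected temperature, and the threshold are invented placeholders standing in for a real physics or simulation model.

```python
# Toy digital twin: ingest real readings, compare against the simulated expectation.
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    expected_temp_c: float = 60.0  # what the simulation model predicts
    alert_threshold: float = 5.0   # allowed deviation before flagging
    state: dict = field(default_factory=dict)

    def ingest(self, reading: dict) -> None:
        """Mirror the latest real-world sensor reading into the twin."""
        self.state.update(reading)

    def check(self) -> str:
        temp = self.state.get("temp_c", self.expected_temp_c)
        drift = abs(temp - self.expected_temp_c)
        return "ALERT: inspect pump" if drift > self.alert_threshold else "nominal"

twin = PumpTwin()
twin.ingest({"temp_c": 67.2, "rpm": 1480})
print(twin.check())  # ALERT: inspect pump
```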

IoT Applications, Architecture, and Data Management

When discussing IoT applications, we think of a set of devices and technologies that work together to collect and process information from one or more data sources. Sensors, actuators, gateways, storage and computing servers, wireless communications, information exchange formats, etc., all form part of the IoT application architecture but are distributed across different levels or layers. There is no global consensus on the architecture for IoT, although there are various proposals. For simplicity, at ITCL, we consider a basic architecture made up of three layers.

Perception Layer: This layer is responsible for data collection using sensors that perform various types of measurements. The collected data is then sent to the network layer. This layer is the most vulnerable to physical attacks, primarily targeting the hardware of IoT devices. These attacks may include, for example, the destruction of the IoT device or its manipulation to alter the information it collects. The elements of this layer may also fall victim to radio interference attacks, which could cause a loss of connectivity for the devices, thus preventing them from communicating within the network. This layer is also susceptible to other types of attacks, such as those affecting the process of sending information gathered by the devices, through modifications to routing tables or altering and stealing information in transit.

Network Layer: This layer handles the connection and data transport between the elements of the other layers, such as the devices in the perception layer or the servers in the application layer. In IoT networks, which are used in distributed applications that may have a large number of connected devices, it is crucial to implement identity authentication and access control in a secure, efficient, and real-time manner. Otherwise, the network could be vulnerable to attacks involving fraudulent nodes or replicas being deployed onto it. The use of wireless technologies introduces potential risks, making it necessary to understand their vulnerabilities and security configurations in depth. Additionally, an attack on a specific node could escalate and compromise or influence the performance of many nodes across the network.

Application Layer: This layer is responsible for storing and processing the collected information. Depending on the application, the amount of information can vary, but some applications require the storage and processing of vast amounts of data, coming from multiple data sources and of various types. Therefore, special attention must be paid to the efficient management of this information with the available computing resources, without neglecting the use of backups or disaster recovery mechanisms to mitigate the impact if a disaster were to occur. Furthermore, there will be different users within the system, and each should only be able to access the information they need, to prevent, for example, theft or destruction of data. Access to the information must be controlled by defining who, how, and when they can access it. It is also essential to ensure the integrity of the data being processed to avoid erroneous results.

In summary, when analyzing the security of an IoT system from the perspective of its layered architecture, different risks can be identified for each layer. It is necessary to thoroughly understand the security requirements associated with each element that makes up an IoT-based application. Based on these requirements, it will be easier to determine which countermeasures should be applied to neutralize, or at least minimize, the risks to which these elements are exposed. Without these security measures, any attacker could jeopardize the infrastructure and information of the service and compromise its continuity.
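
As one concrete countermeasure spanning the three layers, the sketch below shows a perception-layer node publishing a reading through the network layer to an application-layer broker over MQTT, with TLS and per-device credentials as basic protections. It assumes the paho-mqtt 1.x client API; the broker address, topic, and certificate path are examples.

```python
# Perception-layer device -> (TLS-protected network layer) -> application-layer broker.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="sensor-042")
client.tls_set(ca_certs="ca.pem")                          # encrypt the transport
client.username_pw_set("sensor-042", "per-device-secret")  # authenticate the node

client.connect("broker.example.local", 8883)
reading = {"temp_c": 21.4, "ts": 1700000000}
client.publish("plant/line1/temp", json.dumps(reading), qos=1)  # at-least-once delivery
client.disconnect()
```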

Generative AI and Cybersecurity

Generative AI tools use our conversations to improve and train their models. Often, users are unaware of this, and our privacy can be compromised if the tools are not used responsibly. The tool may learn from our personal data and use it to generate responses for other users, leading to the potential leakage of private information.

Therefore, it is important to deactivate model-training improvements. Look for this option in the settings of each tool you use; in ChatGPT, for example, it appears among the data controls in the settings menu.

Even if we deactivate model training, it is crucial to limit the information we provide to generative AI tools. We should provide only the minimum necessary information to make our inquiry, avoiding revealing additional or unnecessary details.

As always, special attention must be given to confidential or sensitive information, such as personal data. We should avoid entering this type of data into the tool to prevent potential future leaks.
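
A simple technical safeguard is to redact obvious personal data before a prompt ever reaches the tool. The Python sketch below does this with two regular expressions; the patterns are illustrative only, and real PII detection needs far more than this.

```python
# Strip emails and phone numbers from a prompt before sending it to an AI tool.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s.-]{7,}\d"),
}

def redact(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or +34 600 123 456."))
# Contact Jane at [email removed] or [phone removed].
```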

It is also essential that employees are aware of other dangers associated with generative AI tools. For instance, malicious browser extensions might appear as a way to facilitate the use of ChatGPT, or fake websites or apps may impersonate ChatGPT. Their aim is to infect our devices or collect our passwords.

Finally, it is important to recognize the misuse of generative AI. The creation of false content, identity impersonation (e.g., with deepfakes), or the manipulation of opinions are some of the abuses of this technology.

In conclusion, generative AI tools can have a significant impact on the productivity of individuals and organizations. However, it is essential to be aware of the risks associated with irresponsible use of the technology and to understand the measures we can take to minimize these risks.

Quantum Computing

At the frontier of the computing world lies an emerging paradigm known as quantum computing. Quantum Theory has enabled some of the greatest technological advancements of the past century, from the creation of lasers and medical Magnetic Resonance Imaging (MRI) to the entire semiconductor physics that underpins modern electronics. Additionally, Quantum Theory is now enabling the development of quantum computing.

Quantum computing represents a new way of performing calculations using the rules of quantum physics. Instead of using traditional bits like classical computers, which can be either 0 or 1, it uses quantum bits, or qubits, which can be 0, 1, or both at the same time. For certain classes of problems, this allows solutions to be found far faster than current computers can manage: loosely speaking, a quantum computer can explore many candidate solutions in superposition at once.
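
The superposition just described can be seen in a few lines of Qiskit. This sketch puts one qubit through a Hadamard gate and prints the resulting state; it assumes the qiskit and qiskit.quantum_info packages are installed.

```python
# One qubit in equal superposition of 0 and 1.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(1)
qc.h(0)  # Hadamard gate: |0> -> (|0> + |1>) / sqrt(2)

state = Statevector.from_instruction(qc)
print(state)                  # amplitudes of ~0.707 for both |0> and |1>
print(state.probabilities())  # [0.5, 0.5]: equal chance of measuring 0 or 1
```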

This type of computing has the potential to revolutionize many sectors. It helps optimize operations that classical computing cannot handle. In the chemical industry, for example, it can speed up the design of new materials and medicines. In logistics, it optimizes delivery routes more efficiently. In energy, it improves the distribution and design of materials for solar energy. This is just the beginning; quantum computing promises to solve complex problems across various fields. Its advancement is creating a new paradigm in information processing, with immense potential to transform industries.

Immersive and Collaborative Robotics

Cutting-edge technologies in smart manufacturing have presented promising opportunities for utilizing collaborative teleoperation between humans and robots in customized manufacturing tasks. To effectively harness the creative capabilities of humans while benefiting from the efficiency and stability of robots, it is crucial to provide an intuitive teleoperation interface. However, current teleoperation systems still face limitations in terms of intuitive operability. An immersive virtual reality (VR)-based teleoperation system offers operators a more intuitive platform for controlling robots, thereby facilitating customized manufacturing processes.

Significant advancements in VR technology have led to substantial improvements in the performance and usability of virtual reality equipment and its accessories. By leveraging these features, VR devices can be integrated into teleoperation systems to reduce operators’ learning costs and provide a more immersive teleoperation experience.

Collaborative robots (cobots) are currently highly valuable allies in industry. For the partnership between humans and these robots to function effectively, they must be programmed to work efficiently. Typically, cobots are equipped with various types of sensors to facilitate their work; to protect human workers, these include tools that detect when a person is nearby, ensuring the robot does not harm them.

Moreover, the software that powers these cobots enables machine learning so that they can identify and understand the work environment in which they operate. It is in the design of this software for programming cobots where developers must ensure that the interactions between these robots and the human environment are neither problematic nor risky for the human worker. Currently, cobots can be found inside cages, like traditional robots, but there are also cobots that can move from one station to another. However, a common feature among them all is the safety of their interaction with humans.

Despite these technological advancements, which have enabled both large and medium-sized companies, and even some small ones, to incorporate these cobots into their production lines, much work remains to be done. This work will focus on refining these new tools to improve their effectiveness and the relationship established between these robots and people.

Neuromorphic Engineering

Neuromorphic engineering and the synthesis of software algorithms into hardware are propelling artificial intelligence to a new level of efficiency and speed. In some cases, this approach achieves processing speeds over a thousand times faster, providing companies with a significant competitive edge in applications related to high-risk systems and data processing where latency is critical.

The scientific community is currently revisiting past concepts and advocating for solutions in edge computing, which involves direct processing on the machine, sensor, or device itself. This represents an evolution of cyber-physical systems, where computation is designed to occur within the system rather than on a separate platform. Additionally, FPGA systems (programmable devices with logic blocks), which experienced a peak in popularity before being overshadowed by microprocessors, ARM systems (electronics with computational capabilities and reduced instruction sets), or DSPs (digital signal processors), are regaining prominence.

New models are being developed based on neural networks, such as SNNs (spiking neural networks), which constitute the third generation of artificial neural networks. These networks are inspired by the actual behavior of the brain and enable continuous learning. Beyond classification and behavioral prediction, SNNs introduce discretization in inputs and plasticity in parameter adjustments.
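
The discrete, spike-based behavior that sets SNNs apart can be illustrated with a single leaky integrate-and-fire neuron, the classic building block of these networks. The Python sketch below uses invented constants; it is a didactic model, not a production SNN.

```python
# Leaky integrate-and-fire neuron: leak, integrate input, spike at threshold.
import numpy as np

def lif_neuron(current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Return the spike train produced by an input current sequence."""
    v, spikes = 0.0, []
    for i_t in current:
        v += dt * (-v + i_t) / tau  # membrane leaks toward 0 while integrating input
        if v >= v_thresh:           # threshold crossing emits a discrete spike
            spikes.append(1)
            v = v_reset             # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

spike_train = lif_neuron(np.full(100, 1.5))  # constant drive above threshold
print(spike_train.sum(), "spikes in 100 steps")
```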

The integration of hardware and software systems into blocks synthesized into ASICs or directly within FPGAs will facilitate the development of true neuromorphic systems. These systems, based on analog circuits, are designed to emulate the neurobiological structures of the nervous system.

Blockchain

In today’s digital age, technology has radically transformed the way we interact with the world around us. One of the most revolutionary advancements to emerge in the last decade is blockchain. This concept, initially linked to cryptocurrencies like Bitcoin, has evolved into a key catalyst for innovation across various sectors.

Blockchain is a decentralized and distributed ledger that allows the creation of an interconnected chain of blocks, each storing information securely and transparently. Unlike traditional databases, where information is stored in a single location, blockchain decentralizes the data, making it resistant to manipulation and cyberattacks.
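
The hash-linking that makes tampering evident can be demonstrated in a few lines of Python. This is a didactic sketch of the chaining idea only; real blockchains add consensus, signatures, and networking on top.

```python
# Each block stores the previous block's hash, so editing any block breaks the chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, data in enumerate(["tx: A->B 5", "tx: B->C 2"], start=1):
    chain.append({"index": i, "data": data, "prev": block_hash(chain[-1])})

def verify(chain: list) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

print(verify(chain))               # True
chain[1]["data"] = "tx: A->B 500"  # tamper with an early block...
print(verify(chain))               # False: the link after the edited block no longer matches
```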

One of blockchain’s greatest contributions is its ability to redefine trust in transactions. Instead of relying on intermediaries like banks or payment platforms, blockchain enables direct transactions between the parties involved, eliminating the need to trust third parties. This not only streamlines processes but also reduces associated costs.

In the business world, blockchain has emerged as a key driver of innovation. It facilitates the creation of smart contracts, self-executing digital agreements that activate when predefined conditions are met. This reduces bureaucracy, optimizes operational efficiency, and provides companies with greater agility in their business processes.

Furthermore, blockchain offers greater transparency and traceability in supply chains. Companies can track each step of the production process, from raw materials to the final product. This not only ensures quality but also meets the growing consumer demand to know the origins of the products they purchase.

Despite its benefits, blockchain faces challenges such as scalability and widespread adoption. However, as the technology evolves, businesses and governments are exploring ways to integrate blockchain into their daily operations. The blockchain revolution is far from over. It is expected that the technology will continue to transform the way we interact, share information, and conduct transactions. As more industries adopt blockchain, we can anticipate an era of greater efficiency, transparency, and trust in innovation driven by this decentralized technology.

Moreover, blockchain technology facilitates regulatory compliance by allowing businesses to maintain an immutable and verifiable record of their transactions. This is particularly useful in highly regulated industries, where companies must meet strict transparency and reporting requirements.

OUR PROJECTS

Productio – Boosting Industrial Efficiency with Industry 4.0

Research on various technologies, techniques, tools, methodologies, and knowledge aimed at increasing the operational capacity of industrial processes (Overall Equipment Effectiveness, OEE) within the framework of the connected industry. The project has enabled the adoption of production and maintenance solutions in the connected industry by implementing digital security.

Duration: 2016 - 2020

Ciberfactory – Enhancing Industry 4.0 with Cyber-Physical Systems

The project addresses industrial research into enabling technologies to increase ITCL's technological capacity and its competitiveness in the technology sector, bringing these experiences closer to regional industrial interests.

Duration: 2019 - 2020

SV3D – Enhancing 3D Security with Video Surveillance Systems

Development of an online 3D content generation system based on videogrammetry and focused on the security sector. The system automatically models objects and environments in three dimensions using nothing more than a video recording, and can model both static and dynamic objects and spaces.

Duration: 2018 - 2021

PigAdvisor – Smart Farming Solutions for Pig Farms

Modern animal production generates a massive amount of data, or Big Data, that demands more complex and comprehensive management systems to make the most of it. This high volume of data requires new large-scale storage techniques and different approaches to retrieving the information.

Duration: 2018 - 2020

Ciber4gr0 – Cybersecurity for the Agri-Food Industry with Blockchain

The objective of the Cyber4gr.0 project is to conduct a technical feasibility study on applying the Cybersecurity Seal in Industry 4.0 in general and in the agri-food industry in particular, as well as on granting the Seal in digital form using blockchain technology to ensure its free consultation, inviolability, and immutability.

Duration: 2018 - 2019