Blockchain enables mutually distrusting parties to share data in a trusted way, without requiring third-party authorities, and to implement business logic in smart contracts. The evolution of blockchain has generated strong and continuously growing interest from industry and academia in adopting it to create novel Information Systems (IS). Blockchain's execution environment offers additional trust guarantees, enhancing auditing and verification activities. The distinctive nature of blockchain technology and its application in novel IS raise new challenges from different perspectives. From a conceptual perspective, important challenges revolve around requirements engineering, modeling, integration, governance, and the evolution of these systems. From a technical perspective, the development of blockchain-based IS raises challenges related to data sharing, data management, system optimization, and the adoption of novel on- and off-chain solutions. Addressing these challenges requires innovative research and solutions to strengthen the adoption of blockchain-based IS and their engineering. The B4ISE workshop welcomes conceptual, technical, application-oriented, and case-study contributions around these challenges.
The rapid advancements in understanding human conditions and diseases have led to the development of innovative systems and AI-oriented approaches dedicated to advancing diagnosis and treatment. The exponential growth of information in this field presents significant research challenges in designing and developing data management pipelines, knowledge generation processes, and their applications. New methods for obtaining, processing, and sharing data and knowledge are critical in healthcare and life sciences. This workshop aims to provide a forum for Information Science, Bioinformatics, and Artificial Intelligence researchers currently facing life science-related challenges. Participants will have the opportunity to share, discuss, and explore emerging approaches to support Search, Knowledge Extraction, Data Science, and Analytics, with significant benefits for healthcare, precision medicine, biology, and genomics.
In the age of advanced information systems, data is crucial for deploying and evaluating systems, but collecting usable benchmarks is challenging due to privacy, scarcity, and legal concerns. Synthetic datasets have become valuable for machine learning tasks, supporting data-driven applications, AI, IoT, Digital Twins, and business process models. The GenSyn workshop aims to discuss generative AI and classical techniques for creating synthetic datasets for AI and non-AI information systems, presenting state-of-the-art tools and approaches.
Our workshop is a dedicated forum that encourages the exploration of how synthetic datasets can be integrated across diverse information system engineering (ISE) contexts. This is a developing field and a promising application area for AI, in which many approaches are still exploratory and not yet mature enough for publication in the main track.
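As a concrete anchor for this scope, the following minimal sketch shows one classical, non-generative technique for producing a labelled synthetic tabular dataset with scikit-learn. All parameter values and the output file name are illustrative assumptions, not a prescribed benchmark configuration.

```python
# A minimal sketch of one classical technique in the workshop's scope:
# generating a labelled synthetic tabular dataset with scikit-learn.
# Parameter values and the file name are illustrative assumptions.
import pandas as pd
from sklearn.datasets import make_classification

# Draw 1,000 synthetic samples with 10 features, 5 of them informative.
X, y = make_classification(
    n_samples=1_000,
    n_features=10,
    n_informative=5,
    n_redundant=2,
    n_classes=2,
    random_state=42,  # fixed seed so the synthetic benchmark is reproducible
)

df = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])
df["label"] = y
df.to_csv("synthetic_benchmark.csv", index=False)  # hypothetical output name
```

Generative approaches in the workshop's scope (e.g., GANs, diffusion models, or LLM-based generators) would replace the sampler above while keeping the same downstream benchmark format.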
The international workshop HybridAIMS explores the intersection of Hybrid Artificial Intelligence (AI) and Enterprise Modelling (EM). Hybrid AI represents an emerging research focus that combines two major AI paradigms: sub-symbolic AI (e.g., machine learning techniques such as neural networks, large language models, and generative AI) and symbolic AI (e.g., machine reasoning, knowledge-based systems, ontologies, knowledge graphs, and fuzzy logic). Meanwhile, Enterprise Modelling is a well-established discipline dedicated to the conceptual representation, design, implementation, and analysis of information systems within organisational contexts and specific domains. The integration of methods from these research fields offers significant potential for advancing the design and engineering of Hybrid AI-based information systems across diverse application areas.
The goal of this workshop is to stimulate research work about how Knowledge Graphs can add context and flexibility to information systems, enabling semantic enrichment and reasoning capabilities for their operation or engineering processes. Knowledge Graphs have primarily been investigated as engineered artifacts in their own right, from their underlying formalisms (e.g., description logics) and enabling technologies (e.g., RDF, LPG) to their knowledge management, semantic enrichment, and integration capabilities.
With this workshop, we aim to shift the focus from what Knowledge Graphs are or how they can be built towards how they can be relevant to Information Systems engineering. Research advances on the interplay between Knowledge Graphs, Machine Learning, and Large Language Models for systems engineering purposes are of particular interest to our workshop, towards establishing novel knowledge flows and knowledge-based system architectures.
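To make the systems-engineering angle concrete, here is a minimal sketch, using the open-source rdflib library, of a Knowledge Graph that adds queryable context about a service architecture. The ex: vocabulary and the service names are hypothetical illustrations, not an established ontology.

```python
# A minimal sketch of a Knowledge Graph adding queryable context to an
# information system, built with the open-source rdflib library.
# The ex: vocabulary and service names are hypothetical illustrations.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)

# Describe a tiny (hypothetical) service landscape as triples.
g.add((EX.OrderService, RDF.type, EX.Microservice))
g.add((EX.PaymentService, RDF.type, EX.Microservice))
g.add((EX.OrderService, EX.dependsOn, EX.PaymentService))

# SPARQL query: which services does OrderService depend on?
query = "SELECT ?dep WHERE { ex:OrderService ex:dependsOn ?dep . }"
for row in g.query(query, initNs={"ex": EX}):
    print(row.dep)  # -> http://example.org/PaymentService
```

An ML or LLM component could consume such query results as structured context, one instance of the knowledge flows the workshop targets.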
Process mining has emerged as a critical area in business process management, enabling organizations to discover, monitor, and improve real processes by extracting knowledge from event logs readily available in today's information systems. Traditional process mining techniques primarily rely on structured data from information systems. However, with the advent of advanced data collection technologies, there is an increasing availability of multimodal data sources such as videos, images, audio recordings, and textual documents that can provide rich insights into processes, especially manual and unstructured ones. The 1st International Workshop on Multimodal Process Mining (MMPM) aims to bring together researchers and practitioners to explore the integration of multimodal data in process mining.
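For context, the structured-data baseline that multimodal approaches extend can be sketched with the open-source pm4py library. The file path below is a placeholder, and the multimodal step (deriving events from video, audio, or documents) is indicated only as a comment.

```python
# A minimal sketch of the structured-data baseline that multimodal
# process mining extends, using the open-source pm4py library.
# "manufacturing.xes" is a placeholder path, not a workshop dataset.
import pm4py

# Load an event log in the standard XES format.
log = pm4py.read_xes("manufacturing.xes")

# Discover a Petri net with the inductive miner.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Visualise the discovered model. Multimodal approaches would enrich the
# log with events derived from video, audio, or documents before this step.
pm4py.view_petri_net(net, initial_marking, final_marking)
```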
In recent years, Large Language Models (LLMs) have emerged as a transformative technology, opening new opportunities across various fields, including Information Systems design. While LLMs excel in Natural Language Processing tasks such as text translation and summarization, their potential in software architecture design, and in particular in service-oriented solutions, remains underexplored.
This workshop aims to provide a forum for innovative proposals striving to integrate LLMs in the landscape of Service-Oriented Architectures and Systems. Of particular interest are the benefits delivered by the adoption of LLMs in improving the design of service-oriented architectural solutions (and their impact on efficiency, accuracy, and scalability) as well as their use in tasks such as service discovery and composition.
The concept of the Digital Twin has become increasingly popular since it was introduced in the scope of Smart Industry (Industry 4.0). A Digital Twin (DT) is a digital representation of a physical twin, i.e., a real-world entity, system, or event. It mirrors a distinctive object, process, building, or human, regardless of whether that thing is tangible or intangible in the real world. DT technology provides benefits such as real-time remote monitoring and control; greater efficiency and safety; predictive maintenance and scheduling; scenario and risk assessment; better intra- and inter-team synergy and collaboration; more efficient and informed decision support; personalisation of products and services; and better documentation and communication. The ultimate purpose of Digital Twins is to improve decision-making for solving real-world problems: the digital model creates the information necessary for decision-making, and the resulting decisions are subsequently applied in the real world. Nowadays, Digital Twins are not limited to industrial applications but are spreading to other areas as well, such as the healthcare domain, personalised medicine, and clinical trials for drug development.
This workshop aims to develop a better understanding of the techniques that can be used to model and implement Digital Twins, and of their applications in different domains. We welcome contributions that introduce formal definitions of the Digital Twin, as well as contributions that describe applications of Digital Twins in different domains. Contributions on tooling for Digital Twins are also welcome.
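As a conceptual illustration of the mirroring loop described above, the following minimal sketch is an assumption-laden toy, not a reference implementation: the pump, its sensor values, and the maintenance rule are all hypothetical.

```python
# A toy sketch of the Digital Twin mirroring loop: sync state from the
# physical twin, reason on the digital model, feed a decision back.
# The pump, thresholds, and maintenance rule are hypothetical.
from dataclasses import dataclass, field


@dataclass
class PumpDigitalTwin:
    """Digital model of a (hypothetical) physical pump."""
    vibration_mm_s: float = 0.0
    history: list[float] = field(default_factory=list)

    def sync(self, sensor_reading: float) -> None:
        # Mirror the physical twin's latest state into the digital model.
        self.vibration_mm_s = sensor_reading
        self.history.append(sensor_reading)

    def needs_maintenance(self) -> bool:
        # Toy predictive-maintenance rule: three consecutive readings
        # above 7 mm/s of vibration.
        recent = self.history[-3:]
        return len(recent) == 3 and all(v > 7.0 for v in recent)


twin = PumpDigitalTwin()
for reading in [5.2, 7.4, 7.9, 8.3]:  # simulated sensor stream
    twin.sync(reading)
    if twin.needs_maintenance():
        print("Schedule maintenance for the physical pump.")  # decision fed back
```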
The Process Mining with Unstructured Data (PMUD) workshop aims to provide a forum for researchers and practitioners to present and discuss how unstructured data can support process mining tasks.
Traditional process mining techniques take structured data as input. However, many valuable insights can be hidden in unstructured data sources, such as emails, social media interactions, legal documents, images, or sensor data. Most state-of-the-art techniques ignore such data, thus missing valuable insights regarding the process. Furthermore, relying solely on structured data can lead to a rigid analysis framework, as structured data often adheres to predefined formats and categories.
Recently, a growing array of approaches to deal with various kinds of unstructured data has emerged in the literature. Among them, NLP techniques have attracted considerable interest thanks to recent breakthroughs such as Large Language Models. Examples include using NLP techniques to extract process models from textual documents or using LLMs to interact with users at runtime. Some studies also advocate using unstructured data to extract inter-case patterns, thus supporting process-level and object-centric approaches.
Despite the promising results, dealing with unstructured data remains one of the main challenges when applying process mining.
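As a toy illustration of the basic idea, the following sketch lifts semi-structured text messages into event-log rows with a regular expression. The message format, field names, and sample content are invented, and published approaches rely on NLP pipelines or LLMs rather than hand-written rules.

```python
# A toy sketch of turning unstructured text into event-log rows.
# The regex, field names, and sample messages are illustrative
# assumptions; real approaches use NLP pipelines or LLMs instead.
import re
import pandas as pd

emails = [
    "2024-03-01 09:13 | case 42 | Order received from customer",
    "2024-03-01 10:02 | case 42 | Invoice sent",
]

pattern = re.compile(
    r"(?P<timestamp>[\d\- :]+) \| case (?P<case_id>\d+) \| (?P<activity>.+)"
)

# Keep only lines that match the (hypothetical) message format.
rows = [m.groupdict() for line in emails if (m := pattern.match(line))]
event_log = pd.DataFrame(rows)  # columns: timestamp, case_id, activity
print(event_log)
```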
As organizations face stringent regulatory requirements such as GDPR, SOX, AML, and ISO standards, compliance is increasingly vital to operational strategy, helping them avoid penalties, protect their reputation, and remain competitive.
The 1st Workshop on Compliance in the Era of Artificial Intelligence (CAI) explores the evolving role of compliance in business processes and information systems. In addition to traditional methods like control definition, risk assessment, compliance monitoring, and reporting, CAI focuses on how emerging technologies, particularly Artificial Intelligence (AI) and Large Language Models (LLMs), can transform compliance management. These tools can automate repetitive tasks, process large datasets, enhance risk detection, and ensure real-time alignment with regulations, offering a more efficient and scalable approach.
The objective of CAI is to introduce new concepts and techniques for managing compliance in domains such as business process management, healthcare, and law. It also aims to address real-world challenges and to explore how AI can improve decision-making and strengthen compliance processes across various domains.