Tuesday, December 31, 2024

Fractal Happy New Year 2025 from CIPARLABS!

As we step into 2025, CIPARLABS reflects on a year of exceptional multidisciplinary research spanning artificial intelligence and neural networks, energy systems, healthcare, and complex systems theory. This year we wish you a happy "fractal" 2025, a nod to the science of complexity that underpins our approach to the problems we set out to solve.

The synergy between AI and complexity science has unlocked innovative solutions to societal challenges. From revolutionizing energy grids to enhancing medical diagnostics, our work exemplifies how common frameworks can empower diverse fields. This post celebrates our achievements, highlighting the unity of disciplines and the endless possibilities of a collaborative future.


2024 Highlights: Research Achievements

1. Transformative Advances in Energy Management

  • Battery Modeling for Renewable Energy Communities: A Thevenin-based equivalent circuit model optimized energy management strategies, balancing computational efficiency and accuracy in predicting battery performance.
  • Energy Load Forecasting Breakthrough: Novel integration of second-derivative features into machine learning models such as LSTM and XGBoost significantly improved predictions of peak energy demand, enhancing microgrid stability (a minimal sketch of the idea follows this list).
  • Smart Grid Fault Detection: The Bilinear Logistic Regression Model enabled interpretable AI-driven fault detection, ensuring resilient energy infrastructures.
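
As a rough illustration of the second-derivative idea, the sketch below augments a load series with discrete first- and second-derivative features before fitting a regressor. It is a minimal example, not the paper's pipeline: the series is synthetic, the names are illustrative, and scikit-learn's GradientBoostingRegressor stands in for the LSTM and XGBoost models actually used.

```python
# Minimal sketch: lagged load values plus discrete 1st/2nd derivatives as features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def make_features(load, n_lags=24):
    """Build lagged values plus first- and second-derivative features."""
    d1 = np.gradient(load)            # rate of change
    d2 = np.gradient(d1)              # curvature, useful for anticipating peaks
    X, y = [], []
    for t in range(n_lags, len(load) - 1):
        X.append(np.concatenate([load[t - n_lags:t], d1[t - n_lags:t], d2[t - n_lags:t]]))
        y.append(load[t + 1])         # one-step-ahead target
    return np.array(X), np.array(y)

# Synthetic daily-cycle load, only to make the sketch runnable.
hours = np.arange(24 * 90)
load = 100 + 30 * np.sin(2 * np.pi * hours / 24) + np.random.normal(0, 2, hours.size)

X, y = make_features(load)
model = GradientBoostingRegressor().fit(X[:-200], y[:-200])
print("held-out R^2:", model.score(X[-200:], y[-200:]))
```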

2. Innovations in Healthcare through Explainable AI

  • Melanoma Diagnosis: Developed a custom CNN with feature injection, utilizing Grad-CAM, LRP, and SHAP methodologies to interpret deep learning predictions. This workflow sets a benchmark for explainability in computer-aided diagnostics.
  • Text Classification in Healthcare Discussions: Conducted a comparative study of traditional and transformer-based models (BERT, GPT-4) to classify Italian-language healthcare-related social media discussions, helping to combat misinformation (a minimal baseline sketch follows this list).
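
For context on what the "traditional" side of that comparison looks like, here is a minimal bag-of-words baseline of the kind evaluated in the study. The texts and labels are placeholders, not data from the paper, and the transformer models (BERT, GPT-4) are not reproduced here.

```python
# Minimal bag-of-words baseline (TF-IDF + logistic regression) of the kind
# compared against transformer models. Texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "miracle cure heals everything in two days",        # placeholder posts
    "randomized trial reports modest improvement",
    "doctors hide this simple trick from patients",
    "vaccination schedule updated by health ministry",
]
labels = [1, 0, 1, 0]   # 1 = potential misinformation, 0 = reliable (illustrative)

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(texts, labels)
print(baseline.predict(["new study confirms treatment efficacy"]))
```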

3. Exploring Human vs. Machine Intelligence

  • Using complex systems theory and Large Language Models, we analyzed GPT-2’s language generation dynamics versus human-authored content. The study revealed distinct statistical properties, such as recurrence and multifractality, informing applications like fake news detection and authorship verification.
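
To give a flavour of the observables involved, the snippet below computes a simple recurrence rate, i.e. the fraction of pairs of points in a signal that fall within a small threshold of each other, for two synthetic one-dimensional signals. It only illustrates the kind of complex-systems measurement discussed; the actual study works on features extracted from real human- and machine-generated text and also considers multifractal measures.

```python
# Minimal sketch of a recurrence-rate measurement on a 1-D signal
# (e.g. per-sentence lengths or token log-probabilities of a text).
import numpy as np

def recurrence_rate(signal, eps=0.1):
    """Fraction of point pairs closer than eps after z-normalisation."""
    s = (signal - signal.mean()) / signal.std()
    dist = np.abs(s[:, None] - s[None, :])   # pairwise distances
    return (dist < eps).mean()               # density of the recurrence matrix

rng = np.random.default_rng(0)
irregular = rng.normal(size=500)             # placeholder "human-like" signal
regular = np.sin(np.linspace(0, 60, 500))    # placeholder "machine-like" signal
print("irregular:", recurrence_rate(irregular))
print("regular:  ", recurrence_rate(regular))
```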

Future Directions: Looking Ahead to 2025

CIPARLABS aims to deepen its focus on explainable AI for critical applications in energy, healthcare, and language modeling. We are committed to expanding our interdisciplinary efforts, incorporating insights from philosophy, complex systems, and AI ethics. Future work will include:

  • Integrating advanced multimodal AI systems in healthcare.
  • Scaling energy solutions to diverse legislative frameworks worldwide.
  • Further bridging AI and human cognition to enhance ethical and transparent AI systems.

List of Published Papers (2024)

An Online Hierarchical Energy Management System for Renewable Energy Communities
Submitted to: IEEE Transactions on Sustainable Energy

Improving Prediction Performances by Integrating Second Derivative in Microgrids Energy Load Forecasting
Published in: IEEE IJCNN 2024

From Bag-of-Words to Transformers: A Comparative Study for Text Classification in Healthcare Discussions in Social Media
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence

An Extended Battery Equivalent Circuit Model for an Energy Community Real-Time EMS
Published in: IEEE IJCNN 2024

Modeling Failures in Smart Grids by a Bilinear Logistic Regression Approach
Published in: Neural Networks, Elsevier

An XAI Approach to Melanoma Diagnosis: Explaining the Output of Convolutional Neural Networks with Feature Injection
Published in: Information, MDPI

Human Versus Machine Intelligence: Assessing Natural Language Generation Models Through Complex Systems Theory
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence

Many others are in progress!

Wednesday, December 18, 2024

New Regulations and Trends for Renewables and Storage Systems in Italy


Italy's energy transition is going through a pivotal phase, with new regulations and market trends reshaping the landscape of renewables and storage systems. Two recent developments deserve particular attention: the approval of the Testo Unico sulle Rinnovabili (the consolidated act on renewables) and the updated figures on the storage-system market. Together they offer an overview of the challenges and opportunities facing the national energy sector.

Testo Unico sulle Rinnovabili: simplification and new rules

Approved by the Council of Ministers, the Testo Unico sulle Rinnovabili (also known as the Testo Unico FER) enters into force on 30 December 2024. Its main goal is to simplify the complex bureaucratic procedures for building and operating renewable plants, through three administrative regimes:

  1. Free activity (attività libera):

    • Exemption from permits and authorizations for interventions that do not interfere with protected assets or public works.

    • Applicable to photovoltaic plants up to 12 MW (building-integrated) or 1 MW (ground-mounted), single wind turbines, agrivoltaic plants up to 5 MW, and other specific configurations.

    • A security deposit is required for interventions on non-anthropized (undeveloped) land.

  2. Simplified authorization procedure (PAS):

    • Requires a declaration of availability of the surfaces involved, minimization of landscape impact, and a surety bond covering restoration costs.

    • Provides for territorial charges and compensation measures for plants above 1 MW.

    • The permit lapses if the works are not started or completed within the prescribed deadlines.

  3. Single authorization (AU):

    • Regional authority for plants up to 300 MW; ministerial authority for offshore plants or plants above 300 MW.

    • Includes restoration obligations and a minimum validity of 4 years.

Acceleration zones: by May 2025 the GSE will publish a map of the areas available for renewable plants. By February 2026, Regions and Autonomous Provinces will adopt plans to further streamline the authorization procedures.

Storage Systems: slowdown and opportunities

The Italian storage-system market is experiencing contrasting dynamics. After the boom driven by the Superbonus scheme, the residential segment has slowed down sharply, while the utility-scale sector has shown significant growth.

Key figures for 2024

  • Residential segment:

    • Installations down 25%, power down 31%, and capacity down 29% compared with 2023.

  • Commercial and industrial (C&I) sector:

    • Installations down 18%, power down 29%, and capacity down 11% year on year.

  • Utility scale:

    • Exponential growth, with installations up 133%, power up 532%, and capacity up 2,877%, driven by capacity-market projects and non-incentivized merchant initiatives.

Regulatory issues

  • The end of the Superbonus and the changes to tax deductions have weighed on the residential segment.

  • The Testo Unico Rinnovabili leaves uncertainties about the authorization procedures for storage systems, with possible conflicts of competence between administrations.

  • Anie Rinnovabili proposes that the new rules apply only to future projects and calls for regulatory harmonization within six months.

Cumulative figures as of September 2024

  • Storage systems installed: 692,386 units.

  • Total power: 5,034 MW.

  • Maximum capacity: 11,388 MWh.

In short, renewables and storage systems are at the heart of Italy's energy transition. While the Testo Unico sulle Rinnovabili promises to simplify procedures, the storage-system market reflects the challenges linked to regulation and to the end of key incentives. The utility-scale figures, however, show the sector's growth potential and point to opportunities ahead.

 

Source 1

Source 2

Thursday, December 5, 2024

An XAI Approach to Melanoma Diagnosis: Explaining the Output of Convolutional Neural Networks with Feature Injection

 

https://www.mdpi.com/2078-2489/15/12/783

Explainable artificial intelligence (XAI) is becoming a cornerstone of modern AI applications, especially in sensitive fields like healthcare, where the need for transparency and reliability is paramount. Our latest research focuses on enhancing the interpretability of convolutional neural networks (CNNs) used for melanoma diagnosis, a field where accurate and trustworthy tools can significantly impact clinical practice.

Melanoma is one of the most aggressive forms of skin cancer, posing challenges in diagnosis due to its visual similarity to benign lesions. While deep learning models have demonstrated remarkable diagnostic accuracy, their adoption in clinical workflows has been hindered by their "black box" nature. Physicians need to understand why a model makes specific predictions, not only to trust the results but also to integrate these tools into their decision-making processes. In this context, our research introduces a novel workflow that combines state-of-the-art XAI techniques to provide both qualitative and quantitative insights into the decision-making process of CNNs. The uniqueness of our approach lies in the integration of additional handcrafted features, specifically Local Binary Pattern (LBP) texture features, into the CNN architecture. These features, combined with the automatically extracted data from the neural network, allow us to analyze and interpret the network's predictions more effectively.

The study leverages public datasets of dermoscopic images from the ISIC archive, carefully balancing training and validation datasets to ensure robust results. The modified CNN architecture features five convolutional layers followed by dense layers to reduce dimensionality, making the network’s internal processes more interpretable. Alongside dermoscopic images, the network is fed LBP features, which are injected into the flattened layer to augment the learning process.
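
A minimal sketch of what such a feature-injection architecture can look like is shown below, assuming a Keras implementation, a 128×128 input, and an LBP histogram of 26 bins; none of these values are taken from the paper, and the LBP descriptors themselves could be computed, for example, with scikit-image's local_binary_pattern.

```python
# Minimal sketch of feature injection: handcrafted LBP descriptors are
# concatenated with the flattened convolutional features before the dense
# layers. Layer sizes and the LBP vector length are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, Model

N_LBP = 26                                   # assumed LBP histogram length

img_in = layers.Input(shape=(128, 128, 3), name="dermoscopic_image")
lbp_in = layers.Input(shape=(N_LBP,), name="lbp_features")

x = img_in
for filters in (32, 64, 64, 128, 128):       # five convolutional blocks
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)

x = layers.Flatten()(x)
x = layers.Concatenate()([x, lbp_in])        # injection point for LBP features
x = layers.Dense(64, activation="relu")(x)   # dense layers reduce dimensionality
out = layers.Dense(1, activation="sigmoid")(x)   # melanoma vs. benign

model = Model(inputs=[img_in, lbp_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```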

To explain the model's predictions, we employed two key XAI techniques: Grad-CAM and Layer-wise Relevance Propagation (LRP). Grad-CAM generates activation maps that highlight regions of the image influencing the network's decisions, while LRP goes further by assigning relevance scores to individual pixels. Together, these methods provide a visual explanation of the decision-making process, helping to identify which areas of an image the model considers most important for classification. Interestingly, we observed that LRP was particularly effective in distinguishing clinically relevant patterns, while Grad-CAM occasionally identified spurious correlations.

For a quantitative perspective, we used the kernel SHAP method, grounded in game theory, to assess the importance of features in the network's predictions. This analysis revealed that most of the classification power - approximately 76.6% - came from features learned by the network, while the remaining 23.4% was contributed by the handcrafted LBP features. Such insights not only validate the role of feature injection but also open avenues for integrating diagnostically meaningful features, such as lesion asymmetry or border irregularities, into future models.
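
The snippet below sketches the kind of kernel SHAP aggregation that can produce such a split between learned and injected features. It is self-contained but entirely illustrative: a logistic-regression head on synthetic features stands in for the real classifier, and the feature-group sizes are arbitrary.

```python
# Illustrative kernel SHAP analysis: mean |SHAP| values are summed per feature
# group to estimate the share of classification power due to network-learned
# features versus injected LBP features. All data and models are synthetic.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cnn, n_lbp = 20, 6                       # small stand-ins for the real group sizes
X = rng.normal(size=(300, n_cnn + n_lbp))  # synthetic "flattened CNN + LBP" features
y = (X[:, :n_cnn].sum(axis=1) + 0.3 * X[:, n_cnn:].sum(axis=1) > 0).astype(int)

head = LogisticRegression(max_iter=1000).fit(X, y)

explainer = shap.KernelExplainer(lambda f: head.predict_proba(f)[:, 1], X[:50])
shap_vals = explainer.shap_values(X[:100], nsamples=200)

abs_vals = np.abs(shap_vals).mean(axis=0)  # mean |SHAP| per feature
total = abs_vals.sum()
print(f"learned-feature share:  {abs_vals[:n_cnn].sum() / total:.1%}")
print(f"injected-feature share: {abs_vals[n_cnn:].sum() / total:.1%}")
```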

The performance of our modified CNN surpassed both our earlier work and other state-of-the-art approaches, achieving an accuracy of 98.41% and an AUC of 98.00% on the external test set. These results underscore the effectiveness of our interpretability framework, showing that improving transparency does not have to come at the expense of accuracy; if anything, accuracy can even be enhanced.

While this research marks significant progress, it also highlights areas for future exploration. The use of handcrafted features with limited diagnostic value, such as LBP, points to the need for incorporating features more aligned with clinical evaluation, like the ABCDE rule used for melanoma assessment. Moreover, involving dermatologists in the evaluation process could provide valuable qualitative feedback to refine the interpretability methods further.

This work demonstrates that XAI is not only a tool for explaining AI decisions but also a critical component for building trust in AI systems, especially in high-stakes fields like medical diagnostics. By combining visual and quantitative explanations, we hope to bridge the gap between AI and clinical practice, paving the way for broader adoption of AI-assisted tools in healthcare. Through this transparent and interpretable approach, we aim to empower clinicians, enhance diagnostic accuracy, and ultimately improve patient outcomes.

The paper is available here: https://www.mdpi.com/2078-2489/15/12/783







Powering AI Sustainably: The Promise of Analog Computing

 


The rapid rise of artificial intelligence, especially in applications like generative AI, has brought incredible breakthroughs, but it has also exposed a pressing issue: the energy cost of computing. Massive models like Llama 2-70B, capable of generating human-like text or answering complex queries, require enormous computational power. Behind every token of output lies a staggering number of calculations, each consuming energy. As these models grow in size and demand increases, so does the strain on power grids and the environment. The need for more energy-efficient AI computing is no longer just a technical challenge; it’s an environmental imperative.

For years, engineers and researchers have sought ways to reduce this burden, pushing the limits of traditional digital hardware. Digital systems are undeniably powerful, but they are inherently inefficient for certain tasks. Most of their energy is spent not on the actual calculations but on moving data back and forth between memory and processors. It’s like building a highway where the majority of fuel is burned idling in traffic rather than moving forward. This inefficiency has driven interest in alternative computing methods, with analog AI emerging as one of the most promising solutions.

 

Analog Computing

Analog computing, while not a new concept, has taken on renewed significance in the age of energy-hungry artificial intelligence. The growing demands of machine learning and generative AI have sparked a reevaluation of computational paradigms, and analog systems are emerging as a viable solution to the energy crisis. Unlike their digital counterparts, which operate through the rapid switching of billions of transistors to represent binary 1s and 0s, analog computing leverages the natural properties of physical systems to carry out computations. This fundamental difference offers remarkable advantages in both efficiency and scalability.

In digital systems, every operation, no matter how simple, requires data to be shuttled between memory and processors. This constant movement is energy-intensive, often overshadowing the energy cost of the calculations themselves. Analog systems, on the other hand, sidestep this bottleneck by embedding computation directly into the physical properties of the system. Here, mathematical operations are not sequences of discrete steps but rather the inherent outcomes of physical interactions. For example, in analog AI, the essential operation of multiplying two values and summing the results, a cornerstone of neural network computations, can be performed almost effortlessly using electrical signals. Ohm's Law, which relates voltage, current, and resistance, allows for multiplication when the "weights" of a neural network are encoded as electrical conductance values. Kirchhoff's Current Law, which governs the summation of currents in a circuit, handles the addition. Together, these principles enable analog systems to execute complex operations in a single step, vastly reducing the time and energy required.

This approach not only cuts down on computational latency but also eliminates the need for high-energy data movement. Since the weights and parameters of the neural network are physically embedded in the hardware, they remain stationary, and only the input signals change dynamically. The result is a system where energy consumption is minimized, not just by optimizing the operations themselves but also by reducing one of the largest contributors to inefficiency in digital systems: data transfer. The advantages don't end there. Analog computing inherently uses continuous signals rather than discrete bits, enabling it to process information in a way that is both natural and precise for certain applications. For AI, this means operations like matrix multiplications, the backbone of deep learning, can be done more quickly and with far less power. This makes analog computing especially attractive for large-scale models where traditional digital systems struggle with inefficiency.

However, the resurgence of analog computing isn't merely about nostalgia for a bygone technology. It's a forward-looking response to the pressing challenges of scaling AI in a sustainable manner. Modern advancements, such as the use of flash memory cells in analog chips, are breathing new life into the concept. These innovations have adapted analog techniques to fit the precision and scalability demands of contemporary AI, bridging the gap between an old idea and the needs of the future.
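
As a purely numerical illustration of that principle (not of any specific analog chip), the snippet below treats a weight matrix as conductances and an activation vector as voltages: Ohm's law gives the per-device currents and Kirchhoff's current law sums them along each column, so the matrix-vector product falls out of the physics in one step. The values and the 2% device-variation term are arbitrary.

```python
# Numerical sketch of an analog crossbar: with weights stored as conductances G
# and inputs applied as voltages V, per-device currents are G*V (Ohm's law) and
# the column read-out currents are their sums (Kirchhoff's current law), i.e. G @ V.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1e-3, size=(4, 8))   # conductances = neural-network weights (siemens)
V = rng.uniform(-0.5, 0.5, size=8)        # input voltages = layer activations (volts)

I_ideal = G @ V                           # the matrix-vector product "for free"

# Real devices are imperfect: add a small conductance-variation term.
I_noisy = (G * (1 + rng.normal(0, 0.02, size=G.shape))) @ V

print("ideal column currents   :", I_ideal)
print("with 2% device variation:", I_noisy)
```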

The Power of Analog Computing


This leap in efficiency comes at a critical time. The global appetite for AI continues to grow, with applications expanding from chatbots to autonomous systems, from scientific research to personalized healthcare. But meeting this demand with current digital architectures risks unsustainable energy consumption. Analog AI offers a way forward, promising not just incremental improvements but transformative gains. It holds the potential to make AI greener, cheaper, and more accessible, ensuring that the benefits of these technologies aren’t limited by their environmental cost. Of course, the path to realizing this vision isn’t without challenges. Analog systems must contend with issues like signal noise, variations in circuit behavior, and the need to translate their analog results into digital formats that other systems can use. But companies like Sageance are tackling these hurdles head-on, developing solutions that calibrate and stabilize the analog processes while retaining their efficiency. By addressing these technical barriers, analog AI is positioning itself as not just an alternative, but a necessity in the evolution of AI hardware.

The stakes couldn’t be higher. The choices we make now about how to build and power AI systems will shape their impact on the world for decades to come. Analog AI isn’t just about better chips or faster models; it’s about creating a sustainable foundation for the future of (artificial) intelligence itself.

More on: https://spectrum.ieee.org/analog-ai-2669898661

Monday, December 2, 2024

The Nostalgia of Java Applets for Discovering Science

 


In the mid-1990s, as the web was emerging as a revolutionary medium for communication and exploration, Java applets were among the first technologies to bring interactivity to the browser. Introduced by Sun Microsystems in 1995 as part of the Java platform, these small, embeddable programs transformed static web pages into dynamic and engaging experiences. For many, applets offered a first glimpse of what the web could become: a space not just for consuming information but for learning, experimenting, and playing.

Java applets were a marvel of their time. They could run seamlessly across different operating systems, requiring only a browser equipped with the Java Runtime Environment. This cross-platform compatibility was a game-changer, allowing developers to create interactive content that could reach users on Windows, macOS, or Linux without modification. For educational purposes, applets were particularly revolutionary. Teachers and students could explore scientific simulations where variables could be tweaked, and the results observed in real-time. Whether it was visualizing gravitational forces, plotting mathematical functions, or exploring chemical reactions, Java applets made complex concepts tangible and accessible.

Beyond education, applets also entertained and engaged. Small, playable games delivered a dose of fun directly through the browser, requiring no installation. From puzzle games to experimental applications, they captured the imagination of users and hinted at the future of interactive media. Applets even found use in data visualization, enabling users to manipulate charts and graphs dynamically, long before modern analytics tools became commonplace.

But while Java applets offered a glimpse of the web's potential, their limitations gradually became apparent. Security was perhaps their Achilles' heel. Because applets required permissions to run on a user's machine, they became a vector for malicious actors. Vulnerabilities in the Java Runtime Environment made systems susceptible to exploits, and over time, applets gained a reputation for being a security risk. Performance was another issue. Many applets were slow to load and resource-intensive, frustrating users with long waits and browser crashes. Moreover, browser support was inconsistent; some browsers implemented Java well, while others did not, leading to compatibility headaches for developers and end-users alike.

As the internet evolved, so too did the technologies that powered it. By the early 2010s, the tide was turning against plugin-based solutions like Java applets. Browser vendors increasingly prioritized security and performance, moving away from plugins in favor of native web technologies. The introduction of HTML5, alongside JavaScript and CSS3, provided developers with powerful tools to create interactive and responsive applications directly in the browser. By 2017, Oracle officially ended public updates for the Java plugin, marking the end of the applet era.

Today, the legacy of Java applets lives on in technologies that address their shortcomings while retaining their spirit of interactivity and accessibility. JavaScript, for example, has become the cornerstone of modern web development. With libraries and frameworks like React and D3.js, developers can create immersive experiences that run efficiently in any browser. WebAssembly pushes this even further, enabling near-native performance for complex applications like scientific simulations and 3D rendering. These modern tools are not just replacements but evolutions, blending the best of what applets once offered with the capabilities of a more mature web.

For those of us who remember the early days of the internet, Java applets evoke a particular kind of nostalgia. They were the building blocks of a more dynamic web, inspiring curiosity and creativity in equal measure. Whether it was tweaking variables in a physics simulation, playing a simple browser-based game, or exploring interactive charts, applets opened doors to experiences that felt magical at the time.

In tribute to that spirit, we present a collection of modern educational tools and scientific learning applications, found while browsing the web and rewritten in JavaScript. These new tools honor the legacy of Java applets while leveraging today's more robust, secure, and efficient web technologies.

Explore the Collection Here

As we marvel at what the modern web has become, it’s worth taking a moment to look back at the technologies that paved the way. Java applets may be a relic of the past, but their influence is still felt in the interactive experiences we enjoy today.

The Future of Lithium-Ion Battery Diagnostics: Insights from Degradation Mechanisms and Differential Curve Modeling

  Featured Research paper: Degradation mechanisms and differential curve modeling for non-invasive diagnostics of lithium cells: An overview...