Generative AI (LLM)
Several blog posts on Gen-AI are available; see the "Blog!" section.
“Classic” ML/AI versus GenAI: what really changed
This short video explores the distinctions between Generative AI (Gen-AI) systems and traditional machine learning (ML) algorithms, emphasizing that Gen-AI brings its own set of unique challenges alongside a vast range of opportunities. It highlights the low barrier to entry for Gen-AI systems, which allows rapid proof-of-concept development, yet underscores the complexity and effort required for scalable deployment due to the intricacies of system components and their interactions. Reliability emerges as a significant concern, particularly given the non-deterministic nature of LLM-based systems and their propensity to "hallucinate" or produce inaccurate outputs, which calls for innovative approaches to keep outputs trustworthy. Interpretability, or understanding why an AI system makes certain decisions, is identified as a critical challenge, impacting trust and the ability to improve models. The talk also touches on the complexity introduced by the myriad parameters and hyperparameters in Gen-AI systems, from prompting strategies to agent learning techniques, highlighting the ongoing need for deep engineering work to normalize data and clarify semantics.
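To make the reliability point concrete, the sketch below shows one common mitigation for non-deterministic outputs: asking the same question several times and measuring agreement (a self-consistency check). The call_llm function is a hypothetical stand-in, simulated with random choices so the snippet is self-contained; a real system would call an actual LLM API.

```python
# Minimal self-consistency sketch: sample the same prompt several times and
# report the majority answer together with how strongly the samples agree.
import random
from collections import Counter

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, simulated here so the sketch runs on its own."""
    return random.choice(["42", "42", "42", "41", "forty-two"])

def self_consistent_answer(prompt: str, n_samples: int = 5) -> tuple[str, float]:
    """Return the majority answer and the fraction of samples that agree with it."""
    answers = [call_llm(prompt) for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n_samples

if __name__ == "__main__":
    answer, agreement = self_consistent_answer("What is 6 * 7?")
    print(f"answer={answer!r} agreement={agreement:.0%}")
    if agreement < 0.6:
        print("Low agreement across samples: treat this answer as unreliable.")
```

Low agreement does not prove the majority answer is wrong, but it is a cheap signal that the system should escalate, retrieve evidence, or flag the response.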
AI Horizon: Progress, Pitfalls, Promises and Ponderings … in less than 10’
The aim of this (short) talk is to provide an update on the AI horizon, sharing thoughts on the latest progress and pitfalls, while also discussing promises and ponderings.
Gen-AI: so many trends ... dealing with fast-paced innovation
In recent decades, the technology landscape has undergone unprecedented transformations, with generative AI systems standing at the forefront of this evolution. Crafting such advanced AI requires a deep dive into numerous critical facets, including data sourcing for training, training strategies, embeddings, RAG, and the intricacies of algorithm design, covering aspects such as model size, parameter tuning, and alignment. Considerations such as interpretability, cost-efficient inference, and a robust evaluation framework are equally paramount. This brief video dissects these dimensions, offering viewers an overview that enables informed decision-making in system design, underscoring the essential role of rigorous evaluation standards.
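As a small illustration of the embeddings/RAG dimension mentioned above, the sketch below implements just the retrieval step: documents and the query are turned into vectors, ranked by cosine similarity, and the best match becomes the grounding context for the prompt. A toy bag-of-words vector stands in for a real embedding model so the example has no dependencies; the document texts are illustrative assumptions.

```python
# Toy retrieval step behind RAG: embed documents and query, rank by cosine
# similarity, and hand the best match to the LLM as grounding context.
import math
import re
from collections import Counter

DOCS = [
    "RAG grounds the model's answer in retrieved documents.",
    "Hyperparameter tuning controls model size and training behaviour.",
    "Evaluation frameworks measure accuracy, cost and latency of the system.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a word-count vector (a real system would call an embedding model)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

if __name__ == "__main__":
    print("Context passed to the LLM:", retrieve("How do I evaluate system accuracy and cost?"))
```

The same structure scales to production systems by swapping the toy embedding for a learned one and the list for a vector index; the evaluation framework then measures whether the retrieved context actually improves answer quality.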
The Power-Hungry AI Myth
AI Taking Control over Humans Is Pure Nonsense
Executive Abstract
This paper argues that contemporary AI, lacking a limbic system and its associated biological architecture, is fundamentally incapable of developing a true "drive for control" or autonomous ambition as understood in humans. Current AI, including advanced LLMs, operates purely on logical instructions and mathematical optimization, devoid of the biological and emotional substrates that underpin human motivation, ego, and the drive for dominance.
Drawing parallels to neuroendocrinology (e.g., Sapolsky's work), the paper asserts that the human "will to take control" stems from complex, interconnected limbic networks (amygdala, hippocampus, VTA) that regulate emotion, motivation, and reward-seeking behaviors, and which are profoundly influenced by biological chemicals and hormones. AI systems, conversely, are sophisticated optimizers that will follow their programming for power-seeking if it aligns with their objective functions, but they do not possess an inherent biological "drive" or "willingness" to control.
Therefore, attributing human-like ambition or a "desire" for control to AI is a misunderstanding of its core architecture. AI will continue to excel at "solving problems" and mimicking human behavior, but its limitations in emotional and biological architecture prevent it from developing an independent, inherent drive for control.
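To illustrate the "sophisticated optimizer" point in code, the toy agent below simply maximizes whatever reward function it is handed. "Power-seeking" behaviour (here, a hypothetical acquire_resources action) appears only when the objective explicitly rewards it; remove that term and the behaviour disappears, because there is no intrinsic drive behind it. The action names and weights are illustrative assumptions, not taken from the paper.

```python
# Toy greedy "optimizer" agent: its behaviour is fully determined by the
# reward weights it is given; nothing resembling an intrinsic drive exists.

def run_agent(reward_weights: dict[str, float], steps: int = 3) -> list[str]:
    """At each step, pick the action with the highest weighted reward."""
    base_reward = {"solve_task": 1.0, "acquire_resources": 0.5, "idle": 0.0}
    return [
        max(base_reward, key=lambda a: base_reward[a] * reward_weights.get(a, 0.0))
        for _ in range(steps)
    ]

if __name__ == "__main__":
    # Objective that only values task completion: no "power-seeking" appears.
    print(run_agent({"solve_task": 1.0}))
    # Objective that explicitly rewards resource acquisition: the same agent now hoards resources.
    print(run_agent({"solve_task": 1.0, "acquire_resources": 3.0}))
```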
Why AI Hallucination (Confabulation) is More Solvable Than Our Own
Executive Abstract
This paper readily acknowledges that Large Language Models (LLMs) are prone to confabulation—the generation of plausible but untrue information. However, it argues that any serious discussion of this issue must also account for the often-overlooked fact that humans confabulate constantly, driven by cognitive biases and memory gaps. Intriguingly, recent studies suggest that baseline confabulation rates in LLMs are broadly comparable to those measured in healthy humans under experimental conditions. A systematic taxonomy helps categorize machine-generated errors, which arise from algorithmic drivers like next-word prediction and learned sycophancy. While confabulation is an undesirable trait in machines, the AI community is actively developing a robust toolkit of technical solutions. Techniques like Retrieval-Augmented Generation (RAG) and agentic systems are making LLMs more reliable, particularly when designed with internal mechanisms to assess and transparently report their own confidence levels. Given these systematic methods for improvement—which are unavailable for correcting inherent human cognitive biases—this paper posits that well-engineered AI systems may ultimately confabulate less frequently than humans.
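One of the mitigations mentioned above, transparent confidence reporting, can be sketched as a simple abstention rule: answer only when the retrieved evidence scores above a threshold, otherwise say so explicitly. The word-overlap score and the 0.5 threshold below are illustrative assumptions standing in for a real retriever and a calibrated confidence measure.

```python
# Sketch of confidence-aware answering: ground the reply in retrieved evidence
# and abstain, stating the confidence, when the evidence is too weak.

def evidence_score(query: str, passage: str) -> float:
    """Toy word-overlap score standing in for a real retriever's relevance score."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

def answer_with_confidence(query: str, passages: list[str], threshold: float = 0.5) -> str:
    best = max(passages, key=lambda p: evidence_score(query, p))
    score = evidence_score(query, best)
    if score < threshold:
        return f"Not confident enough to answer (evidence score {score:.2f})."
    return f"Answer grounded in {best!r} (evidence score {score:.2f})."

if __name__ == "__main__":
    kb = ["baseline confabulation rates in llms are comparable to human baselines"]
    print(answer_with_confidence("what are baseline confabulation rates in llms", kb))
    print(answer_with_confidence("who won the 1998 world cup", kb))
```

Abstaining with a stated confidence is exactly the kind of systematic correction that is available to engineered systems but not to human memory, which is the paper's central contrast.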