#interpretability


Are LMs more than their behavior? 🤔

Join our Conference on Language Modeling (COLM) workshop and explore the interplay between what LMs answer and what happens internally ✨

See you in Montréal 🍁

CfP: shorturl.at/sBomu
Page: shorturl.at/FT3fX
Reviewer Nomination: shorturl.at/Jg1BP

Unlock the Secrets of AI Learning! Ever wondered how generative AI, the powerhouse behind stunning images and sophisticated text, truly learns? Park et al.'s study, ‘Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space,’ offers a new perspective. Forget black boxes: this research unveils a "concept space" where AI learning becomes a visible journey.

By casting concepts into a geometric space, the authors show how AI models learn step by step, laying bare the order and timing of what they come to know. See the crucial role played by the "concept signal" in predicting what a model learns first, and note the "trajectory turns" that reveal the sudden "aha!" moments of emergent abilities.

This is not a theoretical abstraction; the framework has real-world implications:

- Supercharge AI Training: optimise training data to speed up learning and improve efficiency.
- Demystify New Behaviours: understand, and even manage, unforeseen strengths of state-of-the-art AI.
- Debug at Scale: gain insight into a model's knowledge state to identify and fix faults.
- Future-Proof AI: the framework is model-agnostic, priming our understanding of learning in other AI systems.

This study is a must-read for anyone who cares about the future of AI, from scientists and engineers to tech enthusiasts and business executives. It's not only about what AI can accomplish, but how it comes to do so. Interested in the concept-space view of AI learning? Read the full article to explore it.

#AI #MachineLearning #GenerativeAI #DeepLearning #Research #Innovation #ConceptSpace #EmergentCapabilities #AIDevelopment #Tech #ArtificialIntelligence #DataScience #FutureofAI #Interpretability
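A rough intuition for what "watching learning in concept space" could look like in code: the toy sketch below is my own illustration, not the paper's method; the concept names, sigmoid curves, and 0.5 threshold are all invented stand-ins for real measurements of a model's capabilities during training.

```python
# Toy sketch (NOT the paper's code): track a synthetic per-concept "capability"
# curve over training steps and flag the step where each concept suddenly emerges.
import numpy as np

rng = np.random.default_rng(0)
steps = np.arange(0, 10_000, 100)

def capability_curve(onset, sharpness=0.002):
    """Synthetic accuracy-vs-step curve: near 0 before `onset`, rising quickly after."""
    return 1.0 / (1.0 + np.exp(-sharpness * (steps - onset)))

# Pretend a stronger "concept signal" means an earlier onset of learning.
concepts = {"color": 2_000, "shape": 4_500, "size": 7_000}  # hypothetical onsets
curves = {name: capability_curve(onset) + rng.normal(0.0, 0.01, steps.size)
          for name, onset in concepts.items()}

# Flag the first step where each concept's capability crosses a threshold:
# a crude stand-in for the sudden "trajectory turn" of an emergent ability.
for name, curve in curves.items():
    above = np.nonzero(curve > 0.5)[0]
    first = steps[above[0]] if above.size else None
    print(f"{name}: capability first exceeds 0.5 around step {first}")
```

In the paper the analogous signal comes from the model's actual generations; here the sigmoids merely stand in for those measurements.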

Continued thread

@datadon

"The following sections discuss several state-of-the-art interpretable and explainable #ML methods. The selection of works does not comprise an exhaustive survey of the literature. Instead, it is meant to illustrate the commonest properties and inductive biases behind interpretable models and [black-box] explanation methods using concrete instances."
wires.onlinelibrary.wiley.com/ 🧵

The Illusion of Understanding: MIT Unmasks the Myth of AI’s Formal Specifications
A study by MIT Lincoln Laboratory suggests that formal specifications, despite their mathematical precision, are not necessarily interpretable to humans. Participants struggled to validate AI behaviors using these specifications, indicating a discrepancy between theoretical claims and practical understanding. The findings highlight the need for more realistic assessments of AI interpretability.
scitechdaily.com/the-illusion- #AI #FormalSpecifications #interpretability #behavior

SciTechDaily · The Illusion of Understanding: MIT Unmasks the Myth of AI’s Formal Specifications. Some researchers see formal specifications as a way for autonomous systems to "explain themselves" to humans, but a new study finds that we aren't understanding them.

#XAI - This book is a great resource on #explainability / #interpretability methods for #AI and #ML:

"Machine learning has great potential for improving products, processes and research. But computers usually do not explain their predictions which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable.

After exploring the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules and linear regression. The focus of the book is on model-agnostic methods for interpreting black box models such as feature importance and accumulated local effects, and explaining individual predictions with Shapley values and LIME. In addition, the book presents methods specific to deep neural networks.

All interpretation methods are explained in depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This book will enable you to select and correctly apply the interpretation method that is most suitable for your machine learning project. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable."

christophm.github.io/interpret

christophm.github.io · Interpretable Machine Learning: Machine learning algorithms usually operate as black boxes and it is unclear how they derived a certain decision. This book is a guide for practitioners to make machine learning decisions interpretable.
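For readers who want to try one of the model-agnostic methods the book covers, here is a minimal, hedged sketch using scikit-learn's permutation importance. The dataset and model are arbitrary choices for illustration, not code from the book.

```python
# Minimal example of a model-agnostic interpretation method: permutation
# feature importance via scikit-learn (illustrative choices throughout).
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an arbitrary "black-box" model...
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# ...then ask, without looking inside it, how much held-out performance drops
# when each feature is shuffled. A bigger drop means a more important feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: t[1], reverse=True):
    print(f"{name:>6}: {score:.3f}")
```

Shapley values, LIME, and accumulated local effects follow the same spirit: treat the model as a black box and probe how its predictions respond to changes in the inputs.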
Continued thread

There are four invited speakers, but I am only personally familiar with three (Cyril Allauzen @ Google, Will Merrill @ NYU/Google, and Dana Angluin @ Yale).

These three talks should be fantastic, especially if you are interested in #automata, #FormalLanguages, and #Interpretability in neural language models!

(Plugging flann.super.site/ if those sound cool to you)

FLaNN Seminars: We organize a series of weekly online seminars on Formal Language Theory, Natural Language Processing, Machine Learning and Computational Linguistics in an informal setting.