
About

The Machine Learning for Health and Well-Being (MLwell) Lab is a research lab in the Biomedical Engineering Department at Tel-Aviv University. Our vision is to create the technology that gives everyone, everywhere, access to personalized medicine and precision psychology that is: (i) effective, (ii) respectful of the biological, cultural, and behavioral differences between people, (iii) respectful of privacy and other ethical requirements, and (iv) affordable. Our mission is to advance the state of the art in machine learning algorithms for personalized medicine and precision psychology.

Our News

10.11.24

Upcoming Event

"Context-Aware Automated Quality Evaluation of Structured Health Records" will be presented at IDSAI2025 on January 7th, 2025.


In this work, we address the challenge of ensuring data quality in Electronic Health Records (EHRs), where the focus often shifts to model development rather than the underlying data itself. To fill this gap, we introduce the Medical Data Pecking Tool (MDPT), an innovative solution that utilizes unit-testing techniques to evaluate EHR data quality and its suitability for specific research questions. By combining a dataframe testing tool with a Large Language Model (LLM), MDPT can automatically generate and execute customized evaluations based on predefined criteria such as population traits and regional health patterns, ensuring that the data aligns with expected patterns for various diseases and geographic regions.
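
To make the unit-testing idea concrete, here is a minimal sketch of rule-based quality checks on an EHR dataframe. This is an illustration only, not the MDPT API: the column names, thresholds, and helper functions below are hypothetical.

```python
# Illustrative sketch of unit-test-style EHR quality checks (hypothetical names,
# not the MDPT API): each check encodes an expectation about the data, such as
# plausible ages or an expected prevalence range for a condition in a region.
import pandas as pd

def check_age_range(df: pd.DataFrame, low: int = 0, high: int = 120) -> bool:
    """All recorded ages should fall in a plausible human range."""
    return bool(df["age"].between(low, high).all())

def check_prevalence(df: pd.DataFrame, column: str, low: float, high: float) -> bool:
    """The fraction of positive cases should match expected population traits."""
    rate = df[column].mean()
    return low <= rate <= high

def run_checks(df: pd.DataFrame) -> dict:
    """Run all checks and report which expectations the dataset violates."""
    return {
        "age_in_range": check_age_range(df),
        # Hypothetical expectation: 5%-15% diabetes prevalence in this population.
        "diabetes_prevalence": check_prevalence(df, "diabetes", 0.05, 0.15),
    }

if __name__ == "__main__":
    ehr = pd.DataFrame({"age": [34, 56, 71, 12], "diabetes": [0, 1, 0, 0]})
    print(run_checks(ehr))
```

In MDPT, evaluations of this kind are generated automatically by an LLM from the predefined criteria rather than written by hand as above.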

10.11.24

New Paper

The Intelligible and Effective Graph Neural Additive Networks


The Graph Neural Additive Network (GNAN) is the first interpretable-by-design graph neural network. It extends Generalized Additive Models (GAMs) to graph data, offering both high performance and transparency. As a result, GNAN is well-suited for high-stakes applications.
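
As a rough, simplified sketch of the additive idea (not the exact GNAN formulation): each input feature is passed through its own small network, the per-feature outputs are combined using the graph structure, and the prediction is their sum, so each feature's contribution can be inspected directly.

```python
# Simplified sketch of a graph additive model (illustrative only; the actual
# GNAN formulation differs in how graph structure weights the contributions).
import torch
import torch.nn as nn

class GraphAdditiveSketch(nn.Module):
    def __init__(self, num_features: int, hidden: int = 16):
        super().__init__()
        # One small "shape" network per input feature keeps the model additive.
        self.feature_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        ])

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, num_features], adj_norm: row-normalized adjacency with self-loops.
        contributions = [net(x[:, [k]]) for k, net in enumerate(self.feature_nets)]
        node_scores = torch.cat(contributions, dim=1)   # per-feature contributions
        propagated = adj_norm @ node_scores              # mix contributions over the graph
        return propagated.sum(dim=1)                     # additive prediction per node

# Tiny usage example on a 3-node path graph.
x = torch.randn(3, 4)
adj = torch.tensor([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])
adj_norm = adj / adj.sum(dim=1, keepdim=True)
model = GraphAdditiveSketch(num_features=4)
print(model(x, adj_norm).shape)  # torch.Size([3])
```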

10.11.24

New Paper

Lost in Translation: The Limits of Explainability in AI


This paper examines whether eXplainable AI (XAI) tools can effectively support the legal "right to explanation" by analyzing explanation's role across different stakeholders - decision subjects, decision makers, and the broader ecosystem. While XAI proves effective in strengthening system authority from an ecosystem perspective, it falls short in serving both decision subjects' and makers' needs, potentially making it an inadequate and possibly harmful tool for protecting human rights rather than the guardian it was intended to be.

12.9.24

xgbGAMView

Generalized Additive Models (GAMs) based on xgboost, with smoothing and a scikit-learn-compatible interface


xgbGAMView allows learning GAMs with a familiar scikit-learn interface. The GAMs use xgboost as the underlying engine to learn the model, and the library offers visualization as well as graph-smoothing options. It can be installed from PyPI (pip install xgbGAMView).
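
The xgbGAMView API itself is not shown here; as a hedged sketch of the underlying idea, boosted depth-1 trees in plain xgboost already yield an additive model, and a per-feature "shape" curve can be read off by varying one feature while holding the others fixed.

```python
# Sketch of the GAM-via-xgboost idea (not the xgbGAMView API): depth-1 trees make
# the boosted model additive, so each feature gets its own shape function.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

# max_depth=1 restricts every tree to a single feature, which keeps the model additive.
gam = XGBRegressor(max_depth=1, n_estimators=300, learning_rate=0.05)
gam.fit(X, y)

# Shape function for feature 0: vary it over a grid, hold other features at their median.
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 50)
base = np.tile(np.median(X, axis=0), (50, 1))
base[:, 0] = grid
shape_0 = gam.predict(base)
print(shape_0[:5])
```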

15.4.24

New Paper

Impact of Long-COVID in children: a large cohort study


The impact of long-term Coronavirus disease 2019 (COVID-19) on the pediatric population is still not well understood. This study was designed to estimate the magnitude of COVID-19 long-term morbidity 3–6 months after the date of diagnosis.

24.3.24

New Paper

TREE-G: Decision Trees Contesting Graph Neural Networks


When dealing with tabular data, models based on decision trees are a popular choice due to their high accuracy on these data types, their ease of application, and their explainability properties. However, when it comes to graph-structured data, it is not clear how to apply them effectively in a way that incorporates the topological information with the tabular data available on the vertices of the graph. To address this challenge, we introduce TREE-G. TREE-G modifies standard decision trees by introducing a novel split function that is specialized for graph data. Not only does this split function incorporate the node features and the topological information, but it also uses a novel pointer mechanism that allows split nodes to use information computed in previous splits. Therefore, the split function adapts to the predictive task and the graph at hand. We analyze the theoretical properties of TREE-G and demonstrate its benefits empirically on multiple graph and vertex prediction benchmarks. In these experiments, TREE-G consistently outperforms other tree-based models and often outperforms other graph-learning algorithms such as Graph Neural Networks (GNNs) and Graph Kernels, sometimes by large margins. Moreover, TREE-G models and their predictions can be explained and visualized.
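
To illustrate the general idea of a graph-aware split (a deliberate simplification; the actual TREE-G split function and its pointer mechanism are richer than this), a split can be chosen not only on a node's own features but also on features aggregated from its neighbors:

```python
# Simplified illustration of a graph-aware split (not the actual TREE-G split
# function): candidate split features include both raw node features and
# neighbor-averaged features; the best threshold is picked by variance reduction.
import numpy as np

def best_graph_split(X: np.ndarray, A: np.ndarray, y: np.ndarray):
    """X: node features, A: adjacency matrix, y: node targets."""
    neighbor_avg = A @ X / np.maximum(A.sum(axis=1, keepdims=True), 1)
    candidates = np.hstack([X, neighbor_avg])  # own features + aggregated features
    best = (None, None, np.inf)
    for j in range(candidates.shape[1]):
        for t in np.unique(candidates[:, j]):
            left = y[candidates[:, j] <= t]
            right = y[candidates[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = len(left) * left.var() + len(right) * right.var()
            if score < best[2]:
                best = (j, t, score)
    return best  # (feature index, threshold, weighted variance after the split)

# Tiny example: a 3-node path graph with one feature per node.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([0.1, 0.9, 1.1])
print(best_graph_split(X, A, y))
```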

14.3.24

Announcement

Congratulations to Belle Kriger, Hagar Rosenblatt, Chaya Ben-Yehuda, and Yarin Udi for completing their Master's degrees



16.2.24

Announcement

Tree-G: a new tool for learning over graphs


Graphs, such as protein interaction graphs, social contact graphs, and molecular structure graphs, are commonly used. Learning over graphs is challenging and in most cases is done using Graph Neural Networks (GNNs). We introduce a new method that uses trees and gradient boosting for the task, and it outperforms GNNs in many cases. Try it yourself.

9.11.23

New Paper

A Work in Progress: Tighter Bounds on the Information Bottleneck for Deep Learning


The field of Deep Neural Nets (DNNs) is still evolving, and new architectures are emerging to better extract information from available data. The Information Bottleneck (IB) offers an optimal information-theoretic framework for data modeling. However, IB is intractable in most settings. In recent years, attempts were made to combine deep learning with IB, both for optimization and to explain the inner workings of deep neural nets. VAE-inspired variational approximations such as VIB have become a popular way to approximate bounds on the required mutual-information computations. This work continues in this direction by introducing a new tractable variational upper bound for the IB functional which is empirically tighter than previous bounds. When used as an objective function, it enhances the performance of previous IB-inspired DNNs in terms of test accuracy and robustness to adversarial attacks across several challenging tasks. Furthermore, the utilization of information-theoretic tools allows us to analyze experiments and confirm theoretical predictions in real-world problems.
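
For context, the standard IB objective and the widely used variational (VIB-style) bounds it builds on are shown below; the paper's tighter bound itself is not reproduced here.

```latex
% Information Bottleneck objective: compress X into a representation Z while
% retaining information about the target Y.
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)

% Standard variational bounds used by VIB-style methods, where q(y \mid z) is a
% variational decoder and r(z) a variational marginal:
I(Z;Y) \;\ge\; \mathbb{E}_{p(x,y)\,p(z \mid x)}\!\left[\log q(y \mid z)\right] + H(Y),
\qquad
I(X;Z) \;\le\; \mathbb{E}_{p(x)}\!\left[\mathrm{KL}\!\big(p(z \mid x)\,\|\,r(z)\big)\right]
```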

20.9.23

New Paper

Fake, deep-fake, the video law and the principle of narrow interpretation: what is the law?


How should courts act upon deep-fakes? Should legislators modify existing laws to address them? In a new paper (in Hebrew), we study these issues and suggest that adding deep-fake-specific clauses to existing legislation might create problems for laws that do not add such clauses, due to the principle of narrow interpretation.

20.6.23

Announcement

Congratulations to Daniel, Neta, Lotan, Yuval and Yuval on their great undergrad projects.


Congratulations to Daniel Sarusi, Neta Biran, Lotan Hacohen, Yuval Reingold, and Yuval Argoetti on very successful presentations of their undergrad projects. Click the logo to see a short video (in Hebrew) about Yuval A.'s project.

25.5.23

New Paper

The Case Against Explainability


Explainability has been proposed as a possible solution to some of the risks emerging from recent advances in AI. In this paper, we study explanations from a legal point of view and show that many of the reasons for requiring explanations cannot be fulfilled by AI systems. Moreover, in some cases, these explanations can increase risks instead of mitigating them.
