Detailed and accurate annotation and analysis of complex behaviors are necessary for understanding the underlying neural and molecular mechanisms. The fruit fly Drosophila melanogaster is one of the most accessible and well-studied model organisms for identifying the neuronal and molecular underpinnings of behavior. Multiple large-scale screens have been conducted in Drosophila to study complex social behaviors such as aggression and courtship (Asahina, 2017; Greenspan and Ferveur, 2000; Hall, 2002; Kravitz and Fernandez, 2015) to identify the underlying neural circuitry (Agrawal et al., 2020; Asahina et al., 2014; Davis et al., 2018; Hoopfer et al., 2015; Yadav et al., 2024) and genes involved (Agrawal et al., 2020; Benzer, 1967; Gill, 1963; Hall, 1978; Ishii et al., 2022; Wang et al., 2008). These behaviors exhibit distinct, stereotyped patterns. For example, aggression involves chasing, fencing (Jacobs, 1960), wing threats, boxing (Dow and von Schilcher, 1975), lunging, and tussling (Hoffmann, 1987a; Hoffmann, 1987b). Similarly, courtship consists of multiple stereotyped behaviors exhibited by the male fly, such as orienting toward, circling, and following the female (Cook and Cook, 1975; Markow, 1987; O’Dell, 2003). To stimulate the female to be more receptive, the male produces a species-specific song by extending and vibrating one wing (Bennet-Clark and Ewing, 1969; Swain and von Philipsborn, 2021). The male then attempts copulation by curling its abdomen and finally mounts the female for copulation (Bastock and Manning, 1955; Spieth, 1974).
Manual analysis by trained observers is considered the gold standard in behavioral analysis, but it is time-consuming and unsuitable for large-scale screens (Gomez-Marin et al., 2014; Robie et al., 2017a). ‘Computational ethology’ (Anderson and Perona, 2014; Datta et al., 2019) addresses this challenge by leveraging advances in computer vision and machine learning to automate behavioral annotation (Robie et al., 2017b), enabling high-throughput behavioral screening to identify the responsible genes and circuits.
A typical computational ethology workflow involves recording animal behaviors and tracking their positions and body movements, followed by analysis and classification of the observed behaviors across hundreds to thousands of video frames capturing behavioral instances. Several software packages, such as Ctrax, Caltech FlyTracker, and DeepLabCut (Branson et al., 2009; Eyjolfsdottir et al., 2014; Mathis et al., 2018), are widely used for tracking Drosophila, each with its own strengths and weaknesses. Ctrax (Branson et al., 2009) can accurately track fly position and movement, but identity switches remain a challenge, especially when tracking groups of flies. Although both Ctrax and FlyTracker (Eyjolfsdottir et al., 2014) can produce identity switches when groups of flies are tracked simultaneously, Ctrax in particular yielded inaccuracies that required correction with specialized algorithms such as FixTrax (Bentzur et al., 2021).
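To make the tracking-to-features step concrete, the sketch below derives simple per-frame measures (speed of each fly and inter-fly distance) from tracked centroid trajectories. The trajectory format, units, and feature choices here are illustrative assumptions, not the output format of any specific tracker mentioned above.

```python
import numpy as np

def per_frame_features(xy1, xy2, fps=30.0):
    """Derive simple per-frame features from two flies' tracked centroid
    trajectories (each an array of shape (n_frames, 2), assumed in mm).

    Returns each fly's speed (mm/s) and the inter-fly distance (mm).
    Illustrative only: real trackers export richer data (orientation,
    wing angles, body-axis length, etc.).
    """
    def speed(xy):
        # Frame-to-frame displacement converted to mm/s; the first
        # value is duplicated so the output matches the frame count.
        d = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fps
        return np.concatenate([d[:1], d])

    dist = np.linalg.norm(xy1 - xy2, axis=1)
    return speed(xy1), speed(xy2), dist
```

Per-frame features of this kind (and their temporal derivatives) are the typical inputs to downstream behavior classification.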
The effectiveness of machine learning pipelines is ultimately measured by comparing their output to human annotation, a process called ‘ground-truthing’. Rule-based algorithms such as CADABRA (Dankert et al., 2009) can quantify aggression, but ground-truthing revealed mis-scoring and identity switches (Simon and Heberlein, 2020) that must be corrected in a semiautomated manner (Kim et al., 2018). MateBook (Ribeiro et al., 2018) is another rule-based algorithm, used to quantify courtship; like CADABRA, it tends to miss true-positive events, leading to substantial mis-scoring of behaviors under certain experimental conditions.
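Ground-truthing typically reduces to comparing frame-wise predictions against human labels. The minimal sketch below computes precision, recall, and F1 for binary per-frame annotations; missed true-positive events of the kind described above show up directly as lowered recall. The function name and input format are illustrative assumptions, not part of any published pipeline.

```python
def ground_truth_metrics(pred, truth):
    """Frame-wise precision, recall, and F1 of a behavior classifier's
    binary predictions against human 'ground truth' labels.
    Both inputs are sequences of 0/1 values, one per video frame.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)  # missed events lower recall
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```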
The Janelia Automatic Animal Behavior Annotator (JAABA) (Kabra et al., 2013) addresses the limitations of rigid rule-based approaches through supervised learning. In the JAABA pipeline, user-labeled examples are used to train classifiers that capture the dynamic variation in behaviors, allowing JAABA to predict behavior labels for new, unlabeled data.
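A key idea in JAABA is to expand each per-frame track feature into temporal ‘window features’ summarizing a short interval around the frame, so that a classifier can learn dynamics rather than instantaneous values. The sketch below is a heavily simplified illustration of this step, assuming a single input feature and only three window statistics; JAABA's actual feature set is far richer, and its classifier (GentleBoost) is then trained on these features paired with the user's labels.

```python
import numpy as np

def window_features(x, radius=2):
    """Compute simplified JAABA-style temporal window features for one
    per-frame track feature x (e.g. speed): the mean, min, and max over
    a sliding window of +/- radius frames, truncated at the ends.

    Returns an array of shape (len(x), 3); each row is the feature
    vector for one frame, ready to pair with a user's behavior label.
    """
    n = len(x)
    feats = np.empty((n, 3))
    for i in range(n):
        w = x[max(0, i - radius): i + radius + 1]
        feats[i] = [w.mean(), w.min(), w.max()]
    return feats
```

These per-frame feature vectors, together with the user's labeled frames, form the training set for a boosting classifier, which can then predict labels for every frame of new videos.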
Several studies have developed JAABA-based behavioral classifiers for measuring aggression (Chiu et al., 2021; Chowdhury et al., 2021; Duistermars et al., 2018; Leng et al., 2020; Tao et al., 2024) and courtship (GilMartí et al., 2023; Pantalia et al., 2023). However, many of these studies did not make their classifiers publicly available (Duistermars et al., 2018; GilMartí et al., 2023; Pantalia et al., 2023). In other cases, the reported approaches relied on specialized hardware, such as custom 3D-printed parts (Chowdhury et al., 2021; GilMartí et al., 2023) or high-end machine-vision cameras (Chiu et al., 2021; Chowdhury et al., 2021; Duistermars et al., 2018; Hindmarsh Sten et al., 2025; Leng et al., 2020; Tao et al., 2024), limiting their accessibility and wider adoption.
Here, we describe DANCE (Drosophila Aggression and Courtship Evaluator), an open-source, user-friendly analysis and hardware pipeline that simplifies and automates robust quantification of aggression and courtship behaviors. DANCE has two components: (1) a set of robust, machine vision-based behavioral classifiers developed using JAABA to quantify aggression and courtship, and (2) an inexpensive hardware setup built from off-the-shelf materials and consumer smartphones for behavioral recording. Compared with previous methods (Dankert et al., 2009; Ribeiro et al., 2018), the DANCE classifiers improve accuracy and reliability, while the low-cost hardware eliminates the need for specialized arenas and cameras. All classifiers and analysis code are publicly available, enabling broad adoption, especially in resource-limited settings. Together, DANCE provides a powerful, accessible platform for behavioral screening and for discovering mechanisms underlying complex social behaviors and neurological disorders.