Deep Learning Applications for Physiological Data in Emergency and Combat Medicine: Advances, Challenges, and Future Directions

MC
Feb 11, 2025

Introduction

Emerging deep learning models are increasingly applied to physiological data for critical decision-making in emergency medicine and combat casualty care. These models leverage signals like electrocardiograms (ECGs), continuous vital signs from monitors or wearables, electronic health records (EHRs), and even unstructured text (e.g. triage notes) to predict patient risk and guide triage. The goal is to assist clinicians and medics in rapidly identifying life-threatening conditions (e.g. hemorrhagic shock, cardiac arrest, sepsis) and making timely interventions. Despite notable progress, developing robust deep learning systems for these high-stakes environments faces significant challenges. Data from real emergencies or battlefield scenarios are often scarce, and models trained on one setting may not generalize to another. Moreover, issues of bias, interpretability, and reliability under stress conditions limit deployment. This review examines the latest advancements in deep learning on physiological data for emergency and combat medicine, highlights major research gaps, and outlines future directions needed to bridge these gaps. Key topics include the integration of multi-modal data (ECG, vitals, EHR, text), the role of simulated data to overcome data scarcity, and the critical concerns of bias, generalizability, and robustness in current methodologies.

Latest Advancements in Deep Learning for Physiological Data

Deep Learning for ECG and Vital Sign Signals

Deep learning has shown great promise in analyzing raw physiological signals such as ECG and photoplethysmography (PPG). For instance, convolutional neural networks (CNNs) and recurrent models can learn complex patterns in ECG waveforms to detect arrhythmias or ischemic changes with expert-level accuracy. This marks a shift from earlier systems that relied on hand-crafted features. A recent survey noted that while deep learning had already revolutionized 2D medical imaging, its application to 1D physiological signals was only just gaining momentum [1]. Since then, researchers have developed powerful models for tasks like arrhythmia classification, prediction of cardiac arrest, and hemodynamic instability detection. In emergency settings, these models can provide early warnings — e.g. identifying ventricular tachycardia or fibrillation from ECG in real time, or flagging hypotensive episodes from continuous blood pressure and heart rate trends. One study demonstrated a deep learning algorithm that could predict in-hospital cardiac arrest with high sensitivity and a low false-alarm rate by continuously monitoring vital signs [2]. Similarly, deep neural networks have been trained on multi-lead ECGs to recognize subtle signs of cardiac ischemia or electrolyte disturbances faster than humans. These advancements suggest that deep learning on physiologic waveforms can augment clinicians by spotting critical events that might be missed, thereby improving triage and intervention in emergencies.
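To ground the waveform-analysis idea, the sketch below implements the normalized cross-correlation that a single learned 1D convolutional filter approximates, and uses it to pick out QRS-like spikes in a toy synthetic signal. This is a hand-rolled illustration under stated assumptions (the template shape, sampling, and detection threshold are all invented here), not any published arrhythmia model:

```python
import numpy as np

def detect_spikes(signal, template, threshold=0.8):
    """Slide a QRS-like template across the signal and flag local maxima of
    the normalized cross-correlation -- the hand-rolled analogue of what a
    learned 1D convolutional filter computes."""
    t = (template - template.mean()) / (template.std() + 1e-8)
    n = len(t)
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-8)
        scores[i] = np.dot(w, t) / n        # Pearson correlation in [-1, 1]
    return [i for i in range(1, len(scores) - 1)
            if scores[i] > threshold
            and scores[i] >= scores[i - 1] and scores[i] >= scores[i + 1]]

# Toy "ECG": flat noisy baseline with three Gaussian spikes as surrogate QRS complexes.
rng = np.random.default_rng(0)
qrs = np.exp(-0.5 * ((np.arange(25) - 12) / 3.0) ** 2)
sig = 0.02 * rng.normal(size=750)
for start in (100, 350, 600):
    sig[start:start + 25] += qrs

peaks = detect_spikes(sig, qrs)
```

A trained CNN stacks many such filters and learns their shapes from data rather than being handed a template; the sketch only shows why convolution is a natural fit for waveform patterns.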

Wearable Sensors and Early Warning Systems

Outside the hospital or in combat fields, wearable sensors provide continuous vital sign monitoring that deep learning models can exploit for early warning of deterioration. Lightweight models have been developed to run on low-power devices at the edge, enabling real-time analysis of data from smartwatches, chest straps, or patch monitors. For example, Giordano et al. (2024) introduced SepAl, a tiny deep learning model that uses only six vital signs from wearable sensors (such as PPG, accelerometer, and skin temperature) to predict sepsis onset nearly 10 hours in advance [3]. This approach deliberately avoids reliance on lab tests, aiming for deployability in field and resource-limited environments.

Several groups focus on deployable, resource-efficient ML frameworks, often on Android platforms, to provide real-time triage in austere or combat environments [13, 14, 15]. These systems, such as the 4TDS (Tactical Triage & Treatment Decision Support), can integrate wearable sensor data (e.g., heart rate, SpO2) and deliver actionable alerts even with limited connectivity [13, 14]. Researchers are also looking into multimodal integration of wearable signals, EHR, and text-based triage notes, though few solutions so far explicitly target the unique constraints of battlefield care [15].

In combat medicine, similar concepts are applied to detect hemorrhage or shock in wounded soldiers via wearable devices that track heart rate, blood pressure, oxygen saturation, and other signals. Continuous prediction models can alert medics to internal bleeding or shock before obvious signs appear, thus guiding triage (e.g. prioritizing which casualty needs urgent evacuation). Recent machine learning–based early warning systems have shown improved accuracy over traditional threshold-based scores in predicting physiological deterioration [4]. For instance, ML models using streaming vitals achieved higher area-under-curve than conventional early warning scores for detecting cardiorespiratory instability. The latest deep learning architectures (including temporal convolutional networks and LSTMs) are being optimized for robustness and power-efficiency so they can be embedded in wearables, delivering alerts on the battlefield or in prehospital care without requiring cloud computation.
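As a toy illustration of why trend-aware models can beat fixed thresholds, the sketch below compares a static shock-index cut-off against a simple slope-based alert on a synthetic deteriorating casualty. The shock index (heart rate divided by systolic pressure) is a real clinical quantity, but the cut-offs, window length, and vital-sign trajectories here are invented for illustration:

```python
import numpy as np

def trend_alert(hr, sbp, window=10, si_limit=0.9, slope_limit=0.01):
    """Return the first index at which the shock index (HR/SBP) either
    crosses si_limit or has been rising faster than slope_limit per minute
    over the trailing window; None if it never does."""
    si = hr / sbp
    for t in range(window, len(si)):
        slope = np.polyfit(np.arange(window), si[t - window:t], 1)[0]
        if si[t] >= si_limit or slope >= slope_limit:
            return t
    return None

# Synthetic casualty: stable for 60 minutes, then compensated hemorrhage --
# heart rate climbs while systolic pressure slowly falls.
minutes = np.arange(120)
hr = np.where(minutes < 60, 80.0, 80.0 + 1.2 * (minutes - 60))
sbp = np.where(minutes < 60, 120.0, 120.0 - 0.8 * (minutes - 60))

static_alert = int(np.argmax(hr / sbp >= 0.9))   # fixed-threshold rule fires here
early_alert = trend_alert(hr, sbp)               # trend rule fires earlier
```

On this synthetic patient the trend rule fires several minutes before the static threshold, which is the qualitative advantage the ML early-warning literature reports over conventional scores.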

Electronic Health Records and Multimodal Triage Decision Support

In the emergency department (ED), triage decisions can be informed by a wealth of data from electronic health records — including structured vital signs, demographics, prior history, and free-text notes like chief complaints or nurse assessments. Deep learning models combining these multimodal inputs have shown success in predicting patient outcomes and resource needs. A 2021 study developed a deep learning–based triage system using ED EHR data to predict clinical outcomes such as hospital admission, ICU transfer, or mortality [5]. The model fused information by transforming structured data into a text-like sequence and then processing it with a hybrid CNN-RNN architecture with attention. It achieved an AUROC around 0.87 for predicting hospitalization, outperforming traditional triage acuity scores and logistic regression baselines. Notably, this model improved prediction of critical outcomes by 3–5% in accuracy compared to conventional methods. Another group introduced an interpretable deep learning triage tool that incorporates both numeric data and text from triage notes, yielding predictions on which patients will need critical care. Such models address the common issue in ED triage where many patients get assigned an intermediate urgency (e.g. ESI level 3) due to subjective judgment. By objectively analyzing patterns in vital sign trajectories and language in complaints, deep learning can refine risk stratification and reduce under-triage or over-triage. There is also progress in natural language processing (NLP) at triage — for example, using transformer-based models (like BERT) to interpret chief complaints or EMS narratives. Early studies show that NLP models leveraging triage free-text can improve prediction of outcomes such as admission or need for critical intervention [6]. However, these remain mostly in research stages.
Overall, the latest systems demonstrate that integrating multimodal patient data through deep learning can enhance triage decisions, potentially guiding resource allocation more accurately than current triage scales.
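The "structured data as a text-like sequence" idea can be illustrated in a few lines of plain Python: vitals are binned into discrete tokens and concatenated with the free-text complaint, producing a single sequence a downstream sequence model can consume. The bin cut-offs and token names below are invented placeholders, not any study's actual vocabulary:

```python
# Hypothetical cut-offs for binning vitals into tokens -- illustrative only,
# not a validated triage vocabulary.
BINS = {
    "hr":   [(0, 60, "hr_low"), (60, 100, "hr_normal"), (100, 999, "hr_high")],
    "sbp":  [(0, 90, "sbp_low"), (90, 140, "sbp_normal"), (140, 999, "sbp_high")],
    "spo2": [(0, 92, "spo2_low"), (92, 101, "spo2_normal")],
}

def serialize_patient(vitals, chief_complaint):
    """Discretize structured vitals into tokens and append the free-text
    chief complaint, producing one sequence for a downstream sequence
    model (CNN-RNN, transformer, ...)."""
    tokens = []
    for field, value in vitals.items():
        for lo, hi, token in BINS[field]:
            if lo <= value < hi:
                tokens.append(token)
                break
    return tokens + chief_complaint.lower().split()

seq = serialize_patient({"hr": 128, "sbp": 84, "spo2": 90},
                        "sudden abdominal pain after fall")
# seq -> ['hr_high', 'sbp_low', 'spo2_low', 'sudden', 'abdominal', 'pain', 'after', 'fall']
```

The appeal of this serialization is that a single architecture then handles both modalities, rather than maintaining separate numeric and text branches that must be fused later.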

Applications in Combat Casualty Care

Combat medicine presents extreme conditions for decision-making: patients with traumatic injuries in austere environments, limited diagnostics, and often few expert providers. AI researchers are adapting deep learning models to support Tactical Combat Casualty Care (TCCC), focusing on problems like hemorrhage detection, shock prediction, and triage in mass-casualty incidents. For example, Nemeth et al. (2021) developed a field-deployable decision support system that collects vital signs and clinical data on a phone/tablet and uses a machine learning model to detect hemorrhagic shock [7]. Interestingly, their model — trained on a combination of ICU datasets (MIMIC and Mayo Clinic data) — could predict the onset of shock 90 minutes before clinical manifestation with over 75% accuracy. While their best-performing algorithm was a logistic regression in that case, subsequent work is exploring deep learning for improved sensitivity. In general, predicting need for life-saving interventions (like massive transfusion or surgical control of bleeding) is a key task under study. A 2023 review on AI for hemorrhagic trauma care found numerous machine learning models that outperform traditional trauma scoring systems in predicting outcomes like massive transfusion requirements and 48-hour mortality [8]. Many of these models use readily available variables (heart rate, blood pressure, mental status, etc.), aligning with the constraints of field care. Deep neural networks (DNNs) and ensemble methods have been applied to trauma databases to identify patients at risk of acute coagulopathy or shock early in their course. Moreover, the U.S. Defense Advanced Research Projects Agency (DARPA) has launched a Triage Challenge to spur development of algorithms that identify “physiological signatures of injury” for mass casualty triage [9]. As part of this, teams are employing remote sensors (e.g. cameras, UAV-mounted detectors) feeding deep learning models to estimate which victims are critical, even before human medics reach them. These efforts represent the latest push in combat medicine AI — leveraging multimodal sensor data and deep learning to make triage faster and more accurate in disasters or battlefield scenarios. While promising, most are in prototype or evaluation phases. The challenge remains to ensure these models maintain accuracy under the noisy, variable conditions of real deployments.

Recent combat-focused ML research underlines the importance of real-time shock and hemorrhage risk prediction. Notably, Nemeth et al. [13, 14] developed an Android-based triage system validated against Mayo Clinic trauma data, reporting shock-detection AUROCs around 0.83–0.87, while others employed XGBoost-driven triage tools with similar accuracy [19]. These frameworks are designed to function in resource-depleted combat zones, providing predictions that can guide medics in prioritizing care.

Real-World Data vs. Simulated Data in Model Training

Access to large, high-quality real-world emergency medical data is a major bottleneck for training deep learning models. Life-threatening events (like multi-trauma, sepsis, cardiac arrest) are relatively infrequent and heterogeneous, making it hard to gather enough diverse examples. Additionally, combat casualty data are often classified or not systematically recorded. To overcome these issues, researchers are increasingly turning to simulated data and synthetic data to augment model training in data-scarce scenarios. There are two primary approaches: simulating data via physiological models and generating synthetic data via deep generative techniques.

Physiological Simulation: Advanced human physiology simulators (e.g. BioGears engine or other pharmacokinetic models) can generate vital sign trajectories under various injuries and interventions. By leveraging known physiology, one can create virtual patients experiencing, say, hemorrhagic shock, tension pneumothorax, or septic shock, and produce continuous vital signs and lab trends for these scenarios. A recent study described a pipeline using simulation software to produce “diverse, clinically relevant scenarios” for trauma care, specifically to address data-scarce conditions [10]. By varying parameters, they generated a wide range of scenarios (different severities, patient profiles, delays in treatment, etc.), which can then be used to train or stress-test models. Simulated datasets allow us to target rare events or edge cases systematically — for example, creating many examples of massive hemorrhage to teach a model what early subtle signs to look for. These synthetic patients also enable exploration of model performance under controlled variations (e.g. adding noise, or missing data to mimic sensor dropouts). One application showed that models trained on simulated PPG signals (which were derived from real ECG-based RR intervals) successfully detected bradycardia and tachycardia in real wearable data [11]. This demonstrates that carefully simulated physiological signals can stand in for real data in training, yielding models that generalize to actual patients. Similarly, simulated data has been used in reinforcement learning frameworks where an AI agent learns optimal treatment policies (for instance, fluid resuscitation strategies) by interacting with a virtual patient model — something not feasible directly on real critically ill patients.
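A minimal stand-in for such a simulator might look like the following: a toy compensatory-hemorrhage model that emits noisy heart-rate and blood-pressure trajectories parameterized by bleed rate. Real engines such as BioGears model far richer physiology; every coefficient below is an illustrative assumption:

```python
import numpy as np

def simulate_hemorrhage(minutes=60, bleed_rate=0.01, seed=0):
    """Toy compensatory-response model of hemorrhage: heart rate climbs and
    systolic pressure falls as fractional blood volume is lost, plus
    measurement noise. All coefficients are invented for illustration."""
    rng = np.random.default_rng(seed)
    t = np.arange(minutes, dtype=float)
    loss = np.clip(bleed_rate * t, 0.0, 0.45)               # fraction of volume lost
    hr = 75 + 140 * loss + rng.normal(0, 2, minutes)        # tachycardic compensation
    sbp = 120 - 90 * loss**1.5 + rng.normal(0, 2, minutes)  # late pressure collapse
    return t, hr, sbp, loss

# Sweep severity to populate the rare, severe cases that real datasets lack.
scenarios = [simulate_hemorrhage(bleed_rate=r, seed=i)
             for i, r in enumerate((0.002, 0.01, 0.02))]
```

Sweeping parameters like `bleed_rate` (and adding sensor dropouts or delays in treatment) is exactly how simulation pipelines generate the "diverse, clinically relevant scenarios" described above, at whatever volume training requires.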

Beyond traditional physiological simulation engines like BioGears, there is growing interest in combining physics-based models with ML to produce highly realistic hemorrhage or instability scenarios [21, 17, 18]. For example, lower-body negative pressure experiments have been used to induce hemorrhage-like physiology in a controlled manner, and the resulting data used to train or validate algorithms across a spectrum of controlled decompensations [17]. Hybrid multimodal approaches — where a physics-based model provides a realistic baseline and a machine learning model adds noise or variability — can approximate battlefield conditions more closely. Nonetheless, as highlighted by Banerjee and Ghose (2021), the simulation-to-reality domain gap remains an open challenge [19], underscoring the need for domain adaptation strategies or partial retraining on real-world data [18].

Additional Considerations for Sim-to-Real Adaptation: Certain studies address the domain mismatch between synthetic and real-world datasets more directly. As recommended by multiple reviews, advanced methods like transfer learning or domain adaptation can help bridge these gaps [21, 18, 19]. While these techniques show promise — especially for hemorrhage risk classifiers — robust operational trials are still required to confirm real-world performance improvements.

Generative Synthetic Data: Another approach uses generative adversarial networks (GANs) or other deep generative models to create synthetic patient data that statistically resemble real datasets. For example, GANs have been trained to produce realistic “DeepFake” ECG waveforms that mimic real patient ECGs beat-for-beat [12]. In one study, over 120,000 synthetic 12-lead ECGs were generated that closely matched the distribution of real ECGs (with matching QRS durations, QT intervals, etc.), while containing no identifiable patient information. The primary motivation there was to enable data sharing and augmentation without privacy concerns. Such synthetic datasets can supplement limited real data when training deep learning models — effectively acting as a form of data augmentation to improve model generalizability. Beyond ECGs, GAN-based synthesis has been applied to other biosignals and even to creating simulated EHR records. For instance, models can generate fictitious vital sign sequences or lab trajectories for a hypotensive patient, enriching the training set’s diversity. A critical consideration is ensuring the synthetic data are realistic enough so that models trained on them will perform well on genuine data. To this end, researchers often validate synthetic data by comparing known clinical markers or by testing trained models on a holdout real dataset (as done in the PPG arrhythmia detection study, where the CNN was trained on simulated signals and tested on real signals [11]). Simulated and real data integration is also a trend: models might be pre-trained on large volumes of simulator data, then fine-tuned on a smaller real dataset to calibrate them to real-world distribution [10]. This sim-to-real transfer is analogous to methods in robotics and is increasingly used in medical AI to compensate for limited real data.
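The marker-matching validation described above can be sketched as a simple fidelity check: compare summary statistics of the synthetic set against the real set and flag markers that diverge. Here the "markers" are just the mean and spread of RR intervals and the tolerance is arbitrary; a real validation would compare clinically meaningful intervals (QRS duration, QT interval) and use proper statistical tests:

```python
import numpy as np

def fidelity_report(real, synthetic, rel_tol=0.10):
    """Compare simple distributional markers of real vs synthetic
    RR-interval sets (ms); a marker is 'ok' when the synthetic value
    lies within rel_tol of the real one."""
    report = {}
    for name, fn in [("mean", np.mean), ("std", np.std)]:
        r, s = float(fn(real)), float(fn(synthetic))
        report[name] = {"real": r, "synthetic": s,
                        "ok": abs(r - s) <= rel_tol * abs(r)}
    return report

rng = np.random.default_rng(1)
real_rr = rng.normal(800, 50, 2000)      # stand-in for real RR intervals (ms)
good_synth = rng.normal(805, 52, 2000)   # generator close to the real data
bad_synth = rng.normal(600, 20, 2000)    # generator that missed the distribution

good = fidelity_report(real_rr, good_synth)
bad = fidelity_report(real_rr, bad_synth)
```

Even a crude gate like this catches a generator whose output has drifted badly, which is the failure mode that makes sim-to-real transfer unreliable.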

Challenges with Simulated Data: While simulation is a powerful tool, it comes with caveats. Simulated patients may not capture the full complexity and variability of real human physiology or clinical noise. There is a risk that models rely on artifacts present in simulation but absent in reality, or vice versa. This simulation-to-reality gap means that careful validation on real cases is essential. Techniques like domain adaptation are being explored to make models more invariant to whether data came from a simulator or the real world. Despite these issues, simulation remains a crucial strategy for scenarios like combat medicine, where collecting extensive real data is impractical or unethical. The latest research emphasizes combining both real and simulated data to leverage the strengths of each — using simulation to explore and populate rare conditions, and real-world data to ground the models in actual patient distributions.

Gaps in Current Deep Learning Methodologies

Despite encouraging progress, significant gaps and challenges remain in applying deep learning to physiological and emergency data. These gaps span technical, clinical, and ethical dimensions:

  • Data Bias and Limited Diversity: Many current models are trained on retrospective hospital datasets that may not represent the broader patient population or field conditions. As noted in a recent trauma AI review, model performance is often evaluated on a “conforming population” — meaning a relatively homogeneous group — which limits confidence in its wider applicability. Biases in the training data (e.g. underrepresentation of certain ages, ethnicities, or injury patterns) can lead to biased predictions. For example, an ML triage model trained predominantly on adult urban hospital data might mis-triage pediatric or rural patients. In combat scenarios, civilian ICU data used for training may not account for the physiology of young healthy soldiers or the noise of battlefield vital signs. If not addressed, these biases risk exacerbating disparities or making the model unreliable for minority groups. Bias can creep in at multiple stages — from data collection (who gets certain tests or how vitals are recorded) to label definitions (outcomes influenced by prior systemic biases). Recognizing and mitigating bias (through careful dataset curation, re-sampling strategies, or fairness-aware algorithms) is a key gap in current practice.
  • Generalizability and Validation: A frequent critique is that models show excellent metrics in internal validation but fail to generalize to new settings. Most studies to date have been retrospective and site-specific. For instance, an ED triage model developed on one hospital’s EHR might not perform as well at another hospital with different patient demographics or slightly different triage protocols. Similarly, deep models trained on simulated data might not fully translate to live patients without adaptation. External validation and prospective trials are sparse. The literature emphasizes the need for prospective evaluation of these models in real clinical workflows. Without this, it’s unclear how models will behave under true emergency conditions (where clinicians and patients interact with the AI under time pressure). The lack of standardized outcome measures and benchmarks further hampers comparison across studies. This gap calls for community efforts to establish common evaluation datasets or simulation-driven benchmarks for emergency AI, and to conduct multi-center trials to test generalizability.
  • Interpretability and Transparency: Deep learning models are often criticized as “black boxes,” which is problematic in high-stakes medical decisions. Clinicians are understandably wary of algorithms that cannot explain why a patient is flagged as critical. Some recent works have included attention mechanisms or generated feature importance to improve interpretability (for example, highlighting which vital sign trends most influenced a shock prediction). However, model explainability remains limited. The hemorrhagic trauma ML review explicitly calls for future research to increase model explainability, so that specific features driving each prediction can be identified and vetted. Interpretability is not only important for clinician trust, but also for debugging bias — e.g. to ensure a triage model isn’t inadvertently using race or socioeconomic proxies as predictors. There is a gap in methods to provide user-friendly explanations of deep model outputs in emergency settings. We need more work on techniques like interpretable neural nets, post-hoc explanation tools, or hybrid models that incorporate mechanistic understanding (e.g. known physiology equations) to make AI decisions more transparent.
  • Robustness and Reliability: Emergency and combat environments are unpredictable — sensors can fail, noise and motion artifacts abound, and patients may present with novel combinations of problems. Current deep learning models can be brittle in the face of such variability. For example, an algorithm might perform well on clean ICU monitor data but falter on a noisy wearable signal with motion artifacts. Similarly, natural shifts in data distribution (say, moving from a hospital setting to a prehospital ambulance setting) can degrade performance. Robustness to noise, missing data, and distribution shift is a critical gap. Few models are evaluated for how they handle missing vital signs (a common issue if a monitor is disconnected) or corrupted data. Moreover, adversarial robustness — while perhaps less of a concern for intentional attacks in this domain — translates to ensuring that minor, irrelevant perturbations (like an ECG electrode noise burst) don’t cause major mispredictions. Techniques like data augmentation (adding noise, random dropouts during training) and ensemble predictions can help, but need wider adoption. Another aspect of robustness is extremes and edge cases: models should maintain sensible behavior for out-of-range inputs or novel scenarios (for example, a combination of injuries not seen in training). Currently, many models are not stress-tested beyond the typical range of their training data.
  • Integration with Clinical Workflow: This is more of an implementation gap but deeply affects success. Many deep learning models operate as standalone predictions and haven’t been seamlessly integrated into triage workflows or medic protocols. Issues such as alert fatigue (too many false alerts can be distracting or lead to alarms being ignored), usability of the AI interface, and how the AI’s recommendation is presented to a busy clinician are often overlooked in research studies. An algorithm might technically be accurate, but if it’s not delivered in a user-centric way or if it disrupts normal processes, it won’t be adopted. Thus, evaluating human-AI interaction and ensuring the decision support is intuitive and actionable is essential. Currently, very few studies assess the acceptability of these AI tools among emergency staff. For combat medics, considerations like training, cognitive load (the system must be simple under combat stress), and reliability without technical support are crucial. We can consider these as gaps in the translation from lab to field.
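The augmentation tactics named under robustness above (noise injection, random dropouts) are straightforward to implement. A minimal sketch for a multichannel vitals window, with invented noise levels and dropout rates:

```python
import numpy as np

def augment(window, rng, noise_sd=0.05, drop_prob=0.2):
    """Training-time corruptions for a (channels x time) vitals window:
    additive Gaussian noise plus random whole-channel dropout, mimicking
    sensor noise and disconnected monitors. Rates here are illustrative."""
    out = window + rng.normal(0.0, noise_sd, window.shape)
    dropped = rng.random(window.shape[0]) < drop_prob   # which channels to zero
    out[dropped, :] = 0.0                               # zeros stand in for 'missing'
    return out, dropped

rng = np.random.default_rng(0)
vitals = np.vstack([np.full(100, 80.0),    # heart rate
                    np.full(100, 120.0),   # systolic blood pressure
                    np.full(100, 97.0)])   # SpO2
aug, dropped = augment(vitals, rng)
```

In practice the noise scale would be set per channel (heart rate and SpO2 have very different ranges), and dropout of random time segments can be added the same way to mimic intermittent connectivity.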

In addition to the well-known challenges of biases and generalizability, operational validation of synthetic data remains a pressing concern [13, 14]. Many teams rely on retrospective datasets like NTDB or simulated signals, but the domain gaps in real deployments have proven difficult to bridge [21, 18]. Moreover, ethical concerns over demographic under-representation in synthetic data or overreliance on lab-derived simulations could exacerbate biases rather than alleviate them [16]. Greater attention to diversity, representative cohorts, and careful evaluation of synthetic data’s fidelity is paramount before adopting these systems widely in combat or emergency care.

In summary, the major gaps include data limitations (scarcity and bias), model issues (black-box nature and fragility to shifts), and validation shortcomings (lack of prospective, real-world testing). Addressing these gaps is vital before deep learning systems can be reliably trusted for emergency and combat triage, where lives are on the line.

Future Directions for Research and Solutions

Moving forward, research in this domain is converging on several key directions to overcome current limitations:

  • Enhanced Data Strategies: To tackle data scarcity and bias, the community needs to build larger, more diverse datasets and share them responsibly. One approach is federated or multi-institutional learning, where models are trained across data from multiple hospitals or military cohorts without centralizing sensitive data. This can increase diversity and reduce site-specific bias. Additionally, continued development of simulated and synthetic data will play a role — not to replace real data, but to complement it. As simulation engines improve in fidelity, they could generate realistic vital-sign progressions for numerous trauma and medical scenarios, which when combined with clever domain adaptation, will yield models robust to rare events. Data augmentation techniques should become standard: e.g. adding noise, waveform distortions, or simulating sensor failures during training to make models resilient. Another promising avenue is creating “digital twin” patient simulations: using a patient’s initial data to simulate multiple possible trajectories and training models to recognize early which trajectory a patient is on. All these strategies must be paired with careful curation to ensure no subgroup is left underrepresented.
  • Multimodal and Transfer Learning Approaches: Future deep learning models will likely ingest an even richer set of inputs — combining multimodal data such as vitals, lab results, imaging (if available), and free text. Developing architectures that can effectively fuse these heterogeneous data types is a priority. Recent transformer-based models and graph neural networks that represent patients with multi-source data are candidates for this. Moreover, transfer learning from other domains could be exploited. For example, large “foundation models” trained on general physiological time-series (perhaps using self-supervised objectives on massive unlabeled datasets, including simulations) could be fine-tuned for specific tasks like combat triage. This mirrors what has been successful in NLP and computer vision, but needs adaptation for biomedical signals. Some early work suggests, however, that off-the-shelf foundation models for time series don’t yet handle physiological data well, implying domain-specific model development is required. Nonetheless, the idea of a generalizable pretrained model that “understands” human physiology and can be specialized to low-data tasks is compelling. We may also see cross-domain transfer, e.g., using knowledge from critical care (ICU data, which is richer) to inform models in prehospital care (sparser data), or vice versa.
  • Interpretability and Human-in-the-Loop ML: Researchers are increasingly focused on making AI recommendations interpretable and clinically intuitive. Future solutions might incorporate explainability by design — for instance, models that provide a textual rationale (“patient likely in shock due to rapidly dropping blood pressure and rising heart rate”) alongside the prediction. Techniques like attention maps over time-series (highlighting which time window or which vital sign is most important) or example-based explanations (showing similar past patients from training data and their outcomes) could build clinician trust. Additionally, human-in-the-loop approaches will be important: allowing medics to input their observations (e.g. “bleeding controlled”) which the model can incorporate, or enabling the model to ask for clarification when the data are ambiguous. This interactive paradigm can improve performance and acceptance, essentially creating a partnership between AI and clinician. Continuous feedback from users in deployment can be used to update and refine the model (a form of ongoing learning, while carefully preventing drift).
  • Robustness and Validation Frameworks: To ensure reliability, future research must rigorously test models under varied conditions. This includes stress-testing with adversarial or extreme scenarios (possibly using simulation to generate edge cases) to see where models break, and then fortifying them. Expect more work on uncertainty estimation in model outputs — having the model indicate when it is not confident, so that clinicians can double-check or default to standard protocols in those cases. From a methodology standpoint, techniques like Bayesian deep learning or ensemble modeling can provide measures of confidence. On the validation front, we will likely see prospective clinical trials of AI-assisted triage. For example, deploying a deep learning triage assistant in a few emergency departments to measure if it improves patient outcomes or workflow efficiency. Similarly, military medical research might conduct field simulations or wargames with AI triage systems to evaluate their impact. Such studies will generate evidence on the real-world efficacy and pitfalls of these systems. The insights will feed back into model improvements (for instance, if a trial finds clinicians ignored alerts of a certain type, developers can adjust the alert logic or interface).
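The ensemble-based uncertainty idea from the bullet above can be sketched in a few lines: query several models and defer to standard protocols when they disagree. The "models" below are toy hand-set logistic scorers and the spread threshold is arbitrary; a real system would use trained networks and a calibrated abstention rule:

```python
import numpy as np

def ensemble_predict(models, x):
    """Average predictions from an ensemble and report their spread.
    High spread means the members disagree, i.e. the input is likely
    outside the training distribution."""
    preds = np.array([m(x) for m in models])
    return preds.mean(), preds.std()

def decide(models, x, spread_limit=0.15):
    """Issue an alert only when the ensemble is both confident and
    high-risk; otherwise defer to standard protocols."""
    mean, spread = ensemble_predict(models, x)
    if spread > spread_limit:
        return "defer_to_clinician"
    return "alert" if mean > 0.5 else "no_alert"

# Toy ensemble: three hand-set logistic risk scorers that agree on familiar
# inputs and diverge on an unusual combination of features.
models = [lambda x, w=w, b=b: 1.0 / (1.0 + np.exp(-(x @ w + b)))
          for w, b in [(np.array([0.9, 1.1]), -1.0),
                       (np.array([1.0, 1.0]), -1.1),
                       (np.array([1.1, 0.9]), -0.9)]]

familiar = np.array([1.0, 1.0])   # in-distribution: members agree
unusual = np.array([8.0, -6.0])   # out-of-distribution: members diverge
```

The design point is that the abstention path ("defer_to_clinician") is a first-class output, so clinicians see an explicit "not confident" rather than a silently unreliable score.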

Future work should also address ‘Sim-to-Real Transfer’ head-on, potentially through stronger domain adaptation strategies [21, 18, 19]. As recommended in multiple reviews, improved alignment between synthetic and real datasets (e.g., advanced physics models plus realistic noise or variability from field data) could produce more trustworthy ML predictions [18]. Another priority is multimodal data fusion — particularly for prolonged field care — where combining sensor data with textual triage notes could yield large performance gains [20, 15]. Lastly, while some groups have quietly tested these models in pilot or simulated field environments [13, 14], robust operational trials remain critical to confirm real-world efficacy.

  • Integration and Workflow Alignment: Future solutions must be designed with implementation in mind. This means robust software that integrates into existing monitor systems or EHRs, with simple user interfaces. We anticipate more collaboration between AI researchers and clinicians to co-design decision support tools that fit naturally into emergency medicine workflows or combat protocols. For combat medicine, ruggedized, offline-capable AI tools with intuitive visual or audio alerts (given the harsh environment) are needed. Training programs will also be important — educating clinicians and medics on how the AI works, its limitations, and how to interpret its guidance. By aligning development with user needs and context from the start, future AI tools have a better chance of adoption and sustained use.

In conclusion, deep learning for physiological data in emergency and combat medicine is a rapidly advancing field with immense potential to save lives through better triage and decision support. The latest models can process complex biosignals and data streams to detect deterioration earlier and more accurately than traditional methods. Yet, to fully realize this potential, the community must address key gaps around data, bias, generalizability, and trust. Integrating simulated data to cover rare events, improving explainability and robustness, and thoroughly validating in real settings are all crucial steps. Encouragingly, researchers and practitioners are increasingly aware of these challenges. Through interdisciplinary collaboration — combining machine learning innovation with domain expertise in emergency and military medicine — future solutions will likely feature more generalizable, interpretable, and resilient AI models. These will be tested in pragmatic trials and refined with user feedback. The path forward requires not just smarter algorithms, but also smarter approaches to training them (e.g. leveraging simulation and transfer learning) and deploying them responsibly. With these advancements, we move closer to AI-enhanced emergency care where medics and algorithms work in tandem to triage effectively and improve patient outcomes under pressure.

References

  1. Rim B et al., Deep Learning in Physiological Signal Data: A Survey, Sensors, 2020
  2. Kwon JM et al., An Algorithm Based on Deep Learning for Predicting In-Hospital Cardiac Arrest, Journal of the American Heart Association, 2018
  3. Giordano M et al., SepAl: Sepsis Alerts On Low Power Wearables With Digital Biomarkers and On-Device Tiny Machine Learning, IEEE Sensors Journal, 2024
  4. Muralitharan S et al., Machine Learning–Based Early Warning Systems for Clinical Deterioration: Systematic Scoping Review, Journal of Medical Internet Research, 2021
  5. Yao L et al., A Novel Deep Learning–Based System for Triage in the Emergency Department Using Electronic Medical Records: Retrospective Cohort Study, Journal of Medical Internet Research, 2021
  6. Stewart J et al., Applications of natural language processing at emergency department triage: A narrative review, PLoS One, 2023
  7. Nemeth C et al., Decision Support for Tactical Combat Casualty Care Using Machine Learning to Detect Shock, Military Medicine, 2023
  8. Peng HT et al., Artificial intelligence and machine learning for hemorrhagic trauma care, Military Medical Research, 2023
  9. “Tackling the Challenge of Mass Casualty Triage with Technology”, Battelle Technical Report, 2023
  10. Christenson et al., Assessing Foundation Models’ Transferability to Physiological Signals in Precision Medicine, ArXiv, 2024
  11. Sološenko A et al., Training Convolutional Neural Networks on Simulated Photoplethysmography Data: Application to Bradycardia and Tachycardia Detection, Frontiers in Physiology, 2022
  12. Thambawita V et al., DeepFake electrocardiograms using generative adversarial networks are the beginning of the end for privacy issues in medicine, Scientific Reports, 2021
  13. Nemeth C et al., TCCC Decision Support With Machine Learning Prediction of Hemorrhage Risk, Shock Probability, Military Medicine, 2023
  14. Nemeth C et al., Real Time Battlefield Casualty Care Decision Support, Healthcare and Medical Devices, 2022
  15. Gathright R et al., Overview of Wearable Healthcare Devices for Clinical Decision Support in the Prehospital Setting, Sensors, 2024
  16. Stallings JD et al., APPRAISE-HRI: An Artificial Intelligence Algorithm for Triage of Hemorrhage Casualties, Shock, 2023
  17. Joyner M, Effects of Simulated Pathophysiology on the Performance of a Decision Support Medical Monitoring System for Early Detection of Hemodynamic Decompensation in Humans, 2014
  18. Mazumder O et al., Synthetic PPG Signal Generation to Improve Coronary Artery Disease Classification: Study With Physical Model of Cardiovascular System, IEEE Journal of Biomedical and Health Informatics, 2022
  19. Banerjee R, Ghose A, Synthesis of Realistic ECG Waveforms Using a Composite Generative Adversarial Network for Classification of Atrial Fibrillation, European Signal Processing Conference, 2021
  20. Qian L et al., Uncertainty-Aware Deep Attention Recurrent Neural Network for Heterogeneous Time Series Imputation, ArXiv, 2024
  21. Jin X et al., AI algorithm for personalized resource allocation and treatment of hemorrhage casualties, Frontiers in Physiology, 2024
