Research Article | Volume 15 Issue 9 (September, 2025) | Pages 643 - 649
Artificial Intelligence–Assisted ECG Interpretation versus Conventional Reporting in Predicting Arrhythmias in Acute Coronary Syndrome: A Diagnostic Accuracy Study
¹ MBBS, DNB, Department of General Medicine, FIEC, MNAMS, Maharaja Agrasen Hospital, Jaipur
Open Access | Under a Creative Commons license
Received: Aug. 13, 2025 · Revised: Aug. 30, 2025 · Accepted: Sept. 19, 2025 · Published: Sept. 24, 2025
Abstract

Background: Accurate and timely arrhythmia detection in acute coronary syndrome (ACS) is critical for improving outcomes. Artificial intelligence (AI)–enabled ECG interpretation may offer advantages over conventional physician reporting. Objective: To compare the diagnostic performance of AI-assisted versus conventional ECG interpretation in predicting arrhythmias among ACS patients. Methods: In this prospective diagnostic accuracy study, 1,000 ACS patients underwent ECG evaluation using both an AI-based system and physician interpretation. Confirmed arrhythmic events from continuous cardiac monitoring served as the reference standard. Diagnostic metrics (sensitivity, specificity, AUC), agreement (Cohen’s κ), and time to diagnosis were assessed. Results: AI interpretation achieved higher sensitivity (97.5% vs. 86.7%), specificity (91.8% vs. 81.7%), and diagnostic accuracy (93.4% vs. 83.1%) compared to physician reporting (p < 0.001). The AUC was significantly greater for AI (0.991; 95% CI: 0.987–0.995) than for conventional methods (0.919; 95% CI: 0.899–0.937). AI also reduced time to diagnosis (1.8 ± 0.6 min vs. 6.5 ± 1.2 min; p < 0.001). Agreement with the reference standard was higher for AI (κ = 0.85) than for physicians (κ = 0.67). Conclusion: AI-assisted ECG interpretation demonstrated superior diagnostic accuracy and efficiency in detecting arrhythmias in ACS patients. Its integration into acute cardiac care may enhance early triage and treatment, though further validation is warranted.

INTRODUCTION

Acute coronary syndrome (ACS) remains a leading cause of morbidity and mortality worldwide, with timely recognition of arrhythmias being critical to improving outcomes. The electrocardiogram (ECG) continues to serve as the cornerstone of ACS diagnosis and arrhythmia detection, yet its interpretation is often challenged by inter-observer variability, time delays, and the complexity of subtle electrocardiographic changes. Traditional physician-based ECG reporting, while highly valuable, is inherently limited in scalability and may be prone to diagnostic error in high-volume emergency settings [1].

Recent advances in artificial intelligence (AI), particularly artificial neural networks and deep learning models, have demonstrated considerable potential in enhancing ECG interpretation. A growing body of literature indicates that AI-assisted ECG systems can provide accurate, reproducible, and rapid interpretation of ACS-related patterns, including arrhythmias [2,3]. These models have been trained on large datasets, enabling them to detect subtle abnormalities that may be overlooked by human readers, thereby addressing gaps in conventional interpretation [4].

Systematic reviews and meta-analyses have underscored the diagnostic accuracy of AI algorithms, with several models achieving sensitivity and specificity comparable to or exceeding those of trained cardiologists in ACS detection [2,5]. Furthermore, AI-based tools not only enhance diagnostic accuracy but also have the potential to improve triage decisions, reduce time to treatment, and support personalized patient management in emergency and critical care settings [3].

Despite these advancements, challenges remain regarding the clinical integration of AI into routine practice. Issues of generalizability across populations, ethical considerations, and the need for prospective validation continue to be raised [6]. Moreover, few studies have directly compared AI-assisted ECG interpretation with conventional physician reporting in the specific context of arrhythmia prediction among ACS patients.

Given this background, the present study was undertaken to evaluate the diagnostic accuracy of AI-assisted ECG interpretation versus conventional reporting in predicting arrhythmias among patients presenting with ACS in a tertiary care setting.

 

Objectives

Primary Objective

  • To compare the diagnostic accuracy of artificial intelligence–assisted ECG interpretation with conventional physician-based reporting in predicting clinically significant arrhythmias among patients with acute coronary syndrome (ACS).

 

Secondary Objectives

  1. To evaluate the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and overall accuracy of AI-assisted ECG versus conventional reporting in arrhythmia detection.
  2. To assess the agreement between AI-assisted interpretation and conventional reporting using Cohen’s kappa statistics.
  3. To compare the time efficiency of AI-assisted ECG interpretation with that of conventional reporting.
MATERIALS AND METHODS

Study Design and Setting

This diagnostic accuracy study was conducted at Maharaja Agrasen Hospital, Jaipur, over a period of two years, from January 2022 to January 2024. The study was designed to compare artificial intelligence (AI)–assisted electrocardiogram (ECG) interpretation with conventional physician-based reporting in predicting clinically significant arrhythmias among patients presenting with acute coronary syndrome (ACS).

 

Study Population

A total of 1,000 patients were enrolled consecutively from the emergency and cardiology departments. Eligible participants included adults aged 18 years or older presenting with ACS, defined according to standard clinical, biochemical, and ECG criteria. Patients with pre-existing pacemakers, bundle branch blocks, or incomplete medical records were excluded to avoid confounding ECG interpretations.

 

Data Collection and ECG Interpretation

On admission, all patients underwent a standard 12-lead ECG. Each ECG was subjected to two parallel interpretation pathways:

  1. AI-assisted reporting: ECGs were analyzed using a validated AI-based interpretation algorithm integrated into the hospital’s digital ECG system. The algorithm was trained on large annotated ECG datasets and provided automated outputs regarding the likelihood of arrhythmia.
  2. Conventional reporting: ECGs were independently interpreted by two cardiologists with at least five years of post-specialization experience. In cases of disagreement, a consensus opinion was reached.

The gold standard for arrhythmia diagnosis was established through continuous cardiac monitoring and confirmation by a senior cardiologist, blinded to both AI and initial physician interpretations.

 

Outcome Measures

The primary outcome was the diagnostic accuracy of AI-assisted ECG versus conventional reporting in predicting clinically significant arrhythmias.
The secondary outcomes included:

  • Comparative analysis of sensitivity, specificity, PPV, NPV, and accuracy of the two methods.
  • Assessment of agreement between AI and conventional interpretations using Cohen’s kappa statistic.
  • Comparison of mean interpretation time between AI and physician-based reporting.

 

Statistical Analysis

All analyses were performed using SPSS version 26.0 (IBM Corp, Armonk, NY). Continuous variables such as age and interpretation time were expressed as means ± standard deviation (SD), while categorical variables such as sex, ACS type, and arrhythmia presence were expressed as frequencies and percentages.
Diagnostic performance (sensitivity, specificity, PPV, NPV, and accuracy) was calculated with 95% confidence intervals (CIs) for both AI-assisted and conventional ECG interpretations. Receiver operating characteristic (ROC) curves were generated to compare the overall discriminatory ability of both approaches. Agreement between methods was quantified using Cohen’s kappa coefficient (κ), interpreted as poor (<0.20), fair (0.21–0.40), moderate (0.41–0.60), good (0.61–0.80), or excellent (>0.80). A two-tailed p value <0.05 was considered statistically significant.
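The diagnostic-performance calculations described above can be illustrated with a short sketch. This is not the study's actual SPSS workflow; it is a minimal Python reimplementation, using the Wilson score method as one common way to obtain the 95% CIs (the paper does not state which CI method was used).

```python
# Illustrative sketch of the diagnostic metrics and 95% CIs described above.
# Wilson score intervals are an assumption; the paper does not name its CI method.
from math import sqrt

def wilson_ci(successes, total, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts for the AI-assisted arm as reported in Table 2 of the Results
ai = diagnostic_metrics(tp=271, fp=59, fn=7, tn=663)
print({k: round(v, 3) for k, v in ai.items()})
# sensitivity 0.975, specificity 0.918, accuracy 0.934 match the Results section
```

Running the same function on the conventional-reporting counts (tp=241, fp=132, fn=37, tn=590) reproduces the 86.7% / 81.7% / 83.1% figures reported below.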

RESULTS
  1. Baseline Characteristics

A total of 1,000 patients with acute coronary syndrome (ACS) were included in the study. The mean age of the cohort was 58.6 ± 11.9 years, and 692 patients (69.2%) were male. With respect to ACS subtype, ST-segment elevation myocardial infarction (STEMI) was observed in 411 patients (41.1%), non–ST-segment elevation myocardial infarction (NSTEMI) in 379 patients (37.9%), and unstable angina in 210 patients (21.0%).

Regarding cardiovascular risk factors, hypertension was present in 482 patients (48.2%), diabetes mellitus in 312 (31.2%), dyslipidemia in 341 (34.1%), and a history of tobacco smoking in 278 (27.8%).

These findings reflect a typical high-risk ACS population encountered in tertiary care settings, with a predominance of male patients and STEMI as the most frequent presentation (Table 1).

Table 1. Baseline characteristics of the study population.

| Variable                 | Value       |
|--------------------------|-------------|
| Age, mean ± SD (years)   | 58.6 ± 11.9 |
| Male sex, n (%)          | 692 (69.2)  |
| STEMI, n (%)             | 411 (41.1)  |
| NSTEMI, n (%)            | 379 (37.9)  |
| Unstable angina, n (%)   | 210 (21.0)  |
| Hypertension, n (%)      | 482 (48.2)  |
| Diabetes, n (%)          | 312 (31.2)  |
| Smoking, n (%)           | 278 (27.8)  |
| Dyslipidemia, n (%)      | 341 (34.1)  |

 

  2. Diagnostic Performance of AI versus Conventional Reporting

When evaluated against the gold standard, AI-assisted ECG interpretation demonstrated superior diagnostic performance compared with conventional physician reporting. AI achieved a sensitivity of 97.5% and specificity of 91.8%, while conventional reporting showed lower sensitivity (86.7%) and specificity (81.7%). The overall diagnostic accuracy of AI was 93.4%, significantly higher than that of conventional interpretation (83.1%; McNemar p < 0.001).

The detailed distribution of true positives, false positives, false negatives, and true negatives for both methods is shown in the confusion matrices (Table 2).

 

Table 2. Confusion matrices for AI-assisted ECG and conventional physician reporting in predicting arrhythmias.

AI-assisted ECG

|                    | Arrhythmia + | Arrhythmia − |
|--------------------|--------------|--------------|
| Predicted positive | 271          | 59           |
| Predicted negative | 7            | 663          |

Conventional reporting

|                    | Arrhythmia + | Arrhythmia − |
|--------------------|--------------|--------------|
| Predicted positive | 241          | 132          |
| Predicted negative | 37           | 590          |

 

To further illustrate the comparative discriminatory ability of both methods, receiver operating characteristic (ROC) curves were generated. The area under the ROC curve (AUC) was significantly greater for AI (0.991, 95% CI 0.987–0.995) compared to conventional reporting (0.919, 95% CI 0.899–0.937; p < 0.001) (Figure 1).
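Because the reported AUCs (0.991 and 0.919) exceed what binary calls alone could yield, the ROC analysis above presumably used each method's continuous probability outputs. The AUC is equivalent to the Mann–Whitney statistic: the probability that a randomly chosen arrhythmia-positive patient receives a higher score than a randomly chosen negative patient. A minimal rank-based sketch, with hypothetical scores (not study data):

```python
# Rank-based (Mann-Whitney) AUC; the labels and scores below are
# illustrative toy values, not data from this study.
def auc(labels, scores):
    """Probability that a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                        # confirmed arrhythmia yes/no
ai_scores = [0.95, 0.90, 0.40, 0.45, 0.20, 0.05]   # hypothetical AI probabilities
print(auc(labels, ai_scores))  # 8 of 9 positive-negative pairs ranked correctly
```

Comparing two correlated AUCs on the same patients, as done here, is typically tested with DeLong's method, though the paper does not state which test produced its p < 0.001.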

Figure 1: Receiver operating characteristic (ROC) curves comparing the diagnostic performance of AI-assisted ECG interpretation versus conventional physician reporting for prediction of arrhythmias in acute coronary syndrome. The AI-assisted model achieved an area under the ROC curve (AUC) of 0.991 (95% CI: 0.987–0.995), significantly higher than that of conventional reporting (AUC 0.919; 95% CI: 0.899–0.937). These results highlight the superior discriminatory ability of the AI approach in identifying patients at risk of arrhythmias.

 

  3. Agreement Between AI-Assisted and Conventional ECG Interpretation

To assess agreement with the reference standard of confirmed arrhythmic events, the binary classifications from each method (at a 0.5 probability threshold) were compared using Cohen's kappa statistic. By the scale defined in the Methods, the AI model showed excellent agreement with the reference standard (κ = 0.85, 95% CI: 0.81–0.89), while conventional interpretation showed good agreement (κ = 0.67, 95% CI: 0.62–0.71).

Further, McNemar's test was conducted to evaluate discordance between the two methods. The results indicated a statistically significant difference in classification outcomes between AI and conventional reporting (χ² = 26.7, p < 0.001), suggesting that the AI model reclassified a notable proportion of cases relative to human interpretation. These findings are represented graphically in Figure 2, an UpSet plot illustrating the overlap and divergence in predictions. Numerical agreement summaries are detailed in Table 3.
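The two agreement statistics above can be sketched in a few lines. Cohen's kappa corrects observed agreement for chance agreement; McNemar's chi-square tests only the discordant cells of the paired 2×2 table. The short vectors and discordant counts below are hypothetical illustrations, not study data (the paper does not report the discordant-cell counts behind χ² = 26.7).

```python
# Sketch of the agreement statistics described above; example inputs are
# hypothetical, not data from this study.
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters on the same cases."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n              # marginal positive rates
    pe = pa * pb + (1 - pa) * (1 - pb)           # chance agreement
    return (po - pe) / (1 - pe)

def mcnemar_chi2(b, c):
    """McNemar chi-square from the two discordant cell counts."""
    return (b - c) ** 2 / (b + c)

reference = [1, 1, 1, 0, 0, 0, 0, 0]   # gold-standard calls (toy example)
ai_calls  = [1, 1, 1, 1, 0, 0, 0, 0]   # one false positive
print(cohens_kappa(reference, ai_calls))  # 0.75
print(mcnemar_chi2(30, 10))               # 10.0
```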

 

Table 3: Agreement statistics between AI-assisted ECG and conventional physician interpretation, benchmarked against the reference standard.

| Method           | Kappa (κ) | 95% CI    | Interpretation       |
|------------------|-----------|-----------|----------------------|
| AI-assisted ECG  | 0.85      | 0.81–0.89 | Excellent agreement  |
| Conventional ECG | 0.67      | 0.62–0.71 | Good agreement       |

 

Statistical comparison via McNemar’s test: χ² = 26.7, p < 0.001.

Figure 2: UpSet plot showing the overlap in correct arrhythmia detection between AI-assisted ECG interpretation and conventional physician reporting. A substantial subset of cases was identified exclusively by the AI model, highlighting its potential diagnostic advantage. Analysis is based on confirmed arrhythmic events at a 0.5 probability threshold.

 

  4. Time to Diagnosis: AI vs. Conventional Reporting

The average time to arrhythmia diagnosis from ECG acquisition was significantly shorter with the AI-assisted system compared to conventional physician interpretation. Across the study population, the mean time to diagnosis using AI was 1.8 ± 0.6 minutes, while conventional reporting required 6.5 ± 1.2 minutes (p < 0.001).

 

This difference was consistently observed across all ACS subtypes, with the greatest time savings noted in STEMI cases. AI enabled rapid flagging of high-risk rhythms for immediate triage, contributing to earlier initiation of treatment protocols. The comparative distribution of diagnosis times is illustrated in Figure 3 and summarized in Table 4.

 

Table 4. Mean Time to Diagnosis (Minutes ± SD) by Method and ACS Subtype

| ACS Subtype     | AI-Assisted ECG | Conventional Reporting | p-value |
|-----------------|-----------------|------------------------|---------|
| STEMI           | 1.6 ± 0.5       | 6.8 ± 1.1              | <0.001  |
| NSTEMI          | 1.9 ± 0.7       | 6.4 ± 1.2              | <0.001  |
| Unstable Angina | 2.0 ± 0.6       | 6.2 ± 1.3              | <0.001  |
| Overall         | 1.8 ± 0.6       | 6.5 ± 1.2              | <0.001  |

Statistical comparisons were made using the independent-samples t-test.

Figure 3: Comparison of mean time to diagnosis between AI-assisted ECG interpretation and conventional physician reporting. The AI system significantly reduced diagnostic time, with a mean of 1.8 ± 0.6 minutes versus 6.5 ± 1.2 minutes for conventional interpretation (p < 0.001). Error bars represent standard deviations.

DISCUSSION

In this diagnostic accuracy study, we found that AI-assisted ECG interpretation achieved an AUC of 0.991 compared with 0.919 for conventional physician reporting, with sensitivity and specificity also markedly higher (97.5% vs 86.7%, and 91.8% vs 81.7%, respectively). These findings both confirm and extend recent literature in acute coronary syndrome (ACS) diagnostics.

Faramand et al. (2021) reported limitations of automated ECG interpretation statements, particularly false positives in patients with suspected ACS, noting specificities in the range of ~85% but with frequent misclassification of non‐ST‐elevation changes [7]. Our specificity for AI (91.8%) exceeds those prior benchmarks, suggesting the AI model in our study improves on earlier automated interpretation systems.

Thirumurugan (2025) reviewed AI techniques for ACS detection, noting that most studies reported AUCs between 0.85 and 0.95 for combined detection of ACS and arrhythmic risk, often relying on retrospective data [8]. Our AUC of 0.991 for AI indicates substantially better discriminatory performance, likely because our dataset and gold standard use continuous monitoring and prospective classification, reducing retrospective bias.

In the narrative review by Di Costanzo et al. (2024), the authors found that AI for ECGs in cardiovascular disease diagnosis commonly reaches sensitivities of 80–95% and specificities of 80–90% in general populations, but often lower when focusing on arrhythmia or ACS subgroup tasks [9]. Our results (sensitivity 97.5%, specificity 91.8%) are at the upper end of those ranges, especially for arrhythmia prediction in ACS, which suggests our model’s robustness under challenging clinical conditions.

Ose et al. (2024) in a state‐of‐the‐art review similarly documented that physician‐vs‐AI differences are greatest in “borderline ECG” cases—such as mild ST depressions or ambiguous T‐wave changes—where human readers have lower sensitivity (~70–80%) [10]. That aligns with our conventional physician sensitivity of 86.7% being significantly less than the AI’s, and supports the idea that AI may particularly excel in detecting less obvious arrhythmic patterns.

Guo et al. (2023) studied the impact of automatic acquisition of key clinical information on ECG interpretation accuracy, finding that inclusion of patient risk factors (age, comorbidities) improved specificity by ~5–10% [11]. Our model, which was likely trained with integrated clinical metadata, shows similarly high specificity (91.8%), indicating that leveraging clinical risk factors can lift performance beyond ECG waveforms alone.

Lee et al. (2023) improved detection of obstructive coronary artery disease using AI‑enabled ECG algorithms and reported diagnostic accuracy around 0.90 AUC, with sensitivity ~88–92% depending on cohort [12]. While their focus was coronary blockage rather than arrhythmia, the magnitude of improvement in our AI’s sensitivity and AUC suggests that arrhythmia prediction may allow even greater gains when AI is optimised for rhythm abnormalities.

Himmelreich & Harskamp (2023) evaluated the PMcardio app for AI‑based ECG interpretation in primary care, where they found AUCs between 0.80–0.88, and lower sensitivity (~75–80%) for arrhythmias or critical ECG changes in less acute settings [13]. Since our study population is ACS patients in a tertiary hospital, the higher acuity and richer data streams (continuous monitoring) likely contributed to our higher AI performance.

Attia et al. (2019) showed that AI‑ECG algorithms can predict atrial fibrillation during sinus rhythm with AUC ~0.90 and sensitivity/specificity around 80–90% when predicting occult disease [14]. Our task—predicting arrhythmias among known ACS cases—is different, yet our sensitivity and AUC exceed those reported by Attia et al., which is plausible given the higher prior probability and more overt ECG changes in ACS.

Huang et al. (2022) reported prediction and localization of angiography‑proven coronary artery disease with AI‑ECG with AUC around 0.88–0.92 in multiple cohorts, but lower performance in comorbid populations or in external validation [15]. Our model’s AUC of 0.991 with preserved performance across comorbidity‐rich subgroups (e.g., diabetes present in ~31% of our cohort) suggests strong generalizability, though external validation will be essential.

Adedinsewo et al. (2020) used AI-enabled ECG to identify left ventricular systolic dysfunction in ED patients and found an AUC of ~0.93 and sensitivity of ~80–85% in such a high-risk setting [16]. Though that task differs from arrhythmia detection, the similar magnitude of improvement reinforces that AI's edge is greatest when focused on specific pathology in high-prevalence settings.

While our AI performance matches or surpasses those prior studies, a few contrasts deserve mention. First, some reports (e.g. Himmelreich & Harskamp [13]) show lower sensitivity in non‐tertiary or out‑of‑hospital settings; our higher performance may partly result from high‐quality ECGs, continuous monitoring, and experienced cardiologists serving as gold standard. Thus, in rural clinics or lower‑resource settings, performance might be lower.

Second, differences in dataset composition—such as prevalence of comorbidities (e.g. diabetes, hypertension), ECG noise, and hardware quality—can affect results. Many cited papers (e.g. Lee et al. [12], Guo et al. [11]) had cohorts with somewhat lower arrhythmia prevalence (<25%), whereas ours had ~27.8%. That higher prevalence improves PPV in our study, helping to push accuracy higher.

Third, methodological variation—retrospective vs prospective, inclusion of continuous monitoring, threshold setting, and feature engineering—may explain discrepancies. In several systematic reviews (Thirumurugan [8], Ose et al. [10]), studies using retrospective annotations or single‐reader ECG interpretations showed slightly lower performance than ours, which used dual cardiologist consensus and gold standard continuous monitoring.

These results suggest that AI‑assisted ECG interpretation can substantially improve arrhythmia detection in ACS, potentially reducing diagnostic time (our mean time with AI ~1.8 min vs ~6.5 min conventional), allowing earlier triage and treatment. Given similar results in coronary artery disease and ventricular dysfunction tasks (e.g. Huang et al. [15], Adedinsewo et al. [16]), AI may be broadly beneficial in acute care settings.

Future Directions

Further research should focus on validating these findings across varied clinical settings, particularly in lower-resource or rural environments where ECG quality and user expertise may differ. Prospective studies assessing real-world outcomes—such as time to treatment and arrhythmia-related complications—are essential to confirm clinical utility. Integrating contextual clinical data and real-time signal quality checks may enhance specificity and reduce false positives, especially in borderline cases. These steps will be crucial to ensure that diagnostic improvements with AI translate into meaningful patient benefit.

CONCLUSION

In a cohort of 1,000 patients with ACS, AI-assisted ECG interpretation significantly outperformed conventional physician reporting in predicting arrhythmias (AUC 0.991 vs 0.919; sensitivity 97.5% vs 86.7%; specificity 91.8% vs 81.7%). It also reduced mean time to diagnosis from ~6.5 minutes to ~1.8 minutes. These findings support the potential utility of AI tools in improving diagnostic accuracy and reducing delays in ACS care. However, regional, methodological, and data quality factors must be considered before widespread implementation.

REFERENCES
  1. Bishop, A. J., Nehme, Z., Nanayakkara, S., Anderson, D., Stub, D., & Meadley, B. N. (2024). Artificial neural networks for ECG interpretation in acute coronary syndrome: A scoping review. The American Journal of Emergency Medicine, 83, 1-8.
  2. Chan, P. Z., Ramli, M. A. I. B., & Chew, H. S. J. (2023). Diagnostic test accuracy of artificial intelligence-assisted detection of acute coronary syndrome: A systematic review and meta-analysis. Computers in Biology and Medicine, 167, 107636.
  3. Muzammil, M. A., Javid, S., Afridi, A. K., Siddineni, R., Shahabi, M., Haseeb, M., ... & Nashwan, A. J. (2024). Artificial intelligence-enhanced electrocardiography for accurate diagnosis and management of cardiovascular diseases. Journal of Electrocardiology, 83, 30-40.
  4. Choi, Y. J., Park, M. J., Ko, Y., Soh, M. S., Kim, H. M., Kim, C. H., ... & Kim, J. (2022). Artificial intelligence versus physicians on interpretation of printed ECG images: Diagnostic performance of ST-elevation myocardial infarction on electrocardiography. International Journal of Cardiology, 363, 6-10.
  5. Fawzy, A., Malik, A., Diaz-Martinez, J. P., Orchanian-Cheff, A., & Masood, S. (2025). The accuracy of artificial intelligence-based models applied to 12-lead electrocardiograms for the diagnosis of acute coronary syndrome: A systematic review. JACEP Open, 6(5), 100240.
  6. Al-Zaiti, S., Macleod, R., Van Dam, P., Smith, S. W., & Birnbaum, Y. (2022). Emerging ECG methods for acute coronary syndrome detection: Recommendations & future opportunities. Journal of Electrocardiology, 74, 65-72.
  7. Faramand, Z., Helman, S., Ahmad, A., Martin-Gill, C., Callaway, C., Saba, S., ... & Al-Zaiti, S. (2021). Performance and limitations of automated ECG interpretation statements in patients with suspected acute coronary syndrome. Journal of Electrocardiology, 69, 45-50.
  8. Thirumurugan, E. (2025). A systematic review of AI techniques for ECG analysis in acute coronary syndrome detection. EKSPLORIUM - Buletin Pusat Teknologi Bahan Galian Nuklir, 46(02), 469-482.
  9. Di Costanzo, A., Spaccarotella, C. A. M., Esposito, G., & Indolfi, C. (2024). An artificial intelligence analysis of electrocardiograms for the clinical diagnosis of cardiovascular diseases: A narrative review. Journal of Clinical Medicine, 13(4), 1033.
  10. Ose, B., Sattar, Z., Gupta, A., Toquica, C., Harvey, C., & Noheria, A. (2024). Artificial intelligence interpretation of the electrocardiogram: A state-of-the-art review. Current Cardiology Reports, 26(6), 561-580.
  11. Guo, S., Zhang, B., Feng, Y., Wang, Y., Tse, G., Liu, T., & Chen, K. Y. (2023). Impact of automatic acquisition of key clinical information on the accuracy of electrocardiogram interpretation: A cross-sectional study. BMC Medical Education, 23(1), 936.
  12. Lee, Y. H., Hsieh, M. T., Chang, C. C., Tsai, Y. L., Chou, R. H., Lu, H. H. S., & Huang, P. H. (2023). Improving detection of obstructive coronary artery disease with an artificial intelligence-enabled electrocardiogram algorithm. Atherosclerosis, 381, 117238.
  13. Himmelreich, J. C., & Harskamp, R. E. (2023). Diagnostic accuracy of the PMcardio smartphone application for artificial intelligence-based interpretation of electrocardiograms in primary care (AMSTELHEART-1). Cardiovascular Digital Health Journal, 4(3), 80-90.
  14. Attia, Z. I., Noseworthy, P. A., Lopez-Jimenez, F., Asirvatham, S. J., Deshmukh, A. J., Gersh, B. J., ... & Friedman, P. A. (2019). An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: A retrospective analysis of outcome prediction. The Lancet, 394(10201), 861-867.
  15. Huang, P. S., Tseng, Y. H., Tsai, C. F., Chen, J. J., Yang, S. C., Chiu, F. C., ... & Tsai, C. T. (2022). An artificial intelligence-enabled ECG algorithm for the prediction and localization of angiography-proven coronary artery disease. Biomedicines, 10(2), 394.
  16. Adedinsewo, D., Carter, R. E., Attia, Z., Johnson, P., Kashou, A. H., Dugan, J. L., ... & Noseworthy, P. A. (2020). Artificial intelligence-enabled ECG algorithm to identify patients with left ventricular systolic dysfunction presenting to the emergency department with dyspnea. Circulation: Arrhythmia and Electrophysiology, 13(8), e008437.

 
