The signature before the strip — reading the six-week trajectory of an arrhythmia event
Most clinically catastrophic arrhythmias are not events. They are tails of trajectories that have been visible, on the right monitoring substrate, for weeks. The clinical question that decides outcome is no longer “what is the rhythm now” — it is “what was the trajectory that produced it, and could it have been read in time”.
A young patient with no cardiac history arrives in an emergency department in pre-excited atrial fibrillation. The team performs the rhythm-recognition cascade well, the right drug list opens, the patient is converted, and the discharge summary describes the encounter as “first presentation of pre-excited atrial fibrillation in a previously healthy patient with an undiagnosed accessory pathway.” Some version of that sentence is written, every week, in a non-trivial number of emergency departments around the world. The phrase “first presentation” is doing a great deal of work in those sentences. It is true in the sense that this is the first time the rhythm has reached a monitor. It is almost certainly false in the sense that the substrate generating the rhythm announced its readiness, in measurable ways, for weeks before the event. The clinical question that this article is concerned with is not how those teams handle the events. It is whether anything could have been read, in the weeks before, that would have changed those encounters from resuscitations into elective referrals.
A growing body of continuous-monitoring data, drawn from outpatient patches, implantable monitors, and consumer-grade wearables, suggests that the answer to that question is non-trivially “yes” for an increasing fraction of arrhythmia presentations. The mechanism is not mysterious. The substrates that produce sustained arrhythmia — an accessory pathway with a short refractory period, a left ventricular scar with re-entry potential, a hypertrophied ventricle under inadequate beta-blockade, an electrolyte trend that crosses a critical threshold — are not silent in the days and weeks before the event. They generate premonitory signatures: brief, self-terminating runs of the same rhythm that will eventually fail to self-terminate; subtle shifts in heart-rate variability, P-wave morphology, repolarisation gradient, and autonomic tone; and, crucially, biochemical drift in the electrolyte and drug-level milieu within which the substrate operates. None of these signatures is reliably detectable on a 12-lead ECG performed in clinic two weeks before the event. All of them are detectable on a continuously sampled monitoring substrate, if one is in place.
The trajectory, not the strip
The reframe at the centre of this discussion is small but consequential. For most of the history of clinical electrophysiology, the unit of diagnosis has been the rhythm strip — a snapshot, a slice of time during which the rhythm is visible, captured by a monitor that happens to be present at the moment the rhythm exists. This unit has done remarkable work for clinical decision-making, but it has a structural property worth examining: it is sampled at the moment of the event, and almost never before. The clinician who reads it has access to the worst version of the rhythm and to nothing of the trajectory that produced it. The decision the clinician then makes — pharmacologic, procedural, prognostic — is conditioned on this single late slice, augmented by whatever historical 12-lead tracings happen to exist in the chart.
Continuous monitoring substrates change the unit of diagnosis. The strip is no longer the indexing object. The trajectory is. The same patient now produces a signal that begins not at the moment of the event but weeks earlier, and the question the clinician is asked to answer is no longer “what does this strip show” but “what does this trajectory show, and where on the trajectory is the patient now”. The shift is subtle in framing and large in consequence. A patient with a self-terminating run of a rhythm that will, six weeks later, fail to self-terminate is a different clinical entity from a patient whose rhythm appears on a monitor for the first time during the event itself. The former is an elective referral, scheduled into a procedural slot at the convenience of the patient and the centre. The latter is a resuscitation, scheduled at the convenience of nothing.
Figure 1 · Six-week risk-index trajectory leading to an arrhythmia event. Schematic representation. The continuous trajectory (gradient line) records premonitory signatures — brief self-terminating runs at day -38, a longer run with electrolyte drift at day -19, and beat-clustering with autonomic shift at day -6 — that cross the elective-referral threshold weeks before the event. The dashed baseline represents the same patient on conventional intermittent care: one unremarkable 12-lead in clinic, then nothing until the resuscitation. Stylized for educational purposes.
What a continuous monitoring substrate actually sees
The phrase “premonitory signature” can sound speculative when stated abstractly. It is worth being concrete about what a continuous monitoring substrate, sampling at the rates and channels that current technology supports, actually records during the weeks leading up to a sustained arrhythmia. Several distinct signal classes contribute, none individually decisive but jointly informative.
The first class is self-terminating dysrhythmic episodes: brief runs of the rhythm that will eventually fail to terminate, lasting seconds to minutes, occurring with increasing frequency and duration in the weeks before the event. These are the most direct premonitory signal and are routinely captured by current outpatient monitors when one is in place. The challenge is that the same self-terminating run, in a patient without other risk markers, is not a procedural indication on its own; it becomes one only when integrated with the trajectory.
The second class is autonomic and electrophysiologic drift: heart-rate variability narrowing, repolarisation lability, P-wave morphologic shift, and circadian disorganisation. None of these is a diagnosis. Each is a slow-moving variable whose trajectory, sampled densely over weeks, distinguishes patients whose substrate is stable from patients whose substrate is destabilising. The shape of the drift, not the magnitude at any instant, carries the predictive signal.
The third class — the one that has only recently become technically accessible at the patient level — is biochemical milieu: trends in serum potassium, magnesium, calcium, blood pH, and circulating levels of antiarrhythmic and anticoagulant drugs at the doses actually being taken. Most arrhythmia substrates are exquisitely sensitive to this milieu. A patient whose accessory pathway has a borderline refractory period at normal electrolytes is a different patient from the same anatomy in a slowly trending hypokalaemic state — and the difference, on conventional point-of-care chemistry sampled twice a year in clinic, is invisible. On a continuously sampled biochemical channel, it is the dominant variable.
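To make the three classes concrete, the sketch below reduces each to the kind of simple trend feature a trajectory model might consume: a weekly burden of self-terminating runs from an episode log, and a least-squares daily slope for slowly sampled channels such as nightly heart-rate variability or estimated serum potassium. The data shapes, names, and feature definitions are illustrative assumptions, not any monitoring product's actual pipeline.

```python
# Illustrative sketch: hypothetical data shapes and feature definitions,
# not any monitoring product's actual pipeline.
from dataclasses import dataclass
from datetime import datetime

import numpy as np


@dataclass
class Episode:
    start: datetime      # onset of a self-terminating run
    duration_s: float    # seconds before it self-terminated


def weekly_run_burden(episodes: list[Episode], now: datetime, weeks: int = 6) -> np.ndarray:
    """Total seconds of self-terminating runs per week, oldest week first."""
    burden = np.zeros(weeks)
    for ep in episodes:
        week_idx = (now - ep.start).days // 7   # 0 = most recent week
        if 0 <= week_idx < weeks:
            burden[weeks - 1 - week_idx] += ep.duration_s
    return burden


def daily_slope(days: np.ndarray, values: np.ndarray) -> float:
    """Least-squares slope (units per day) of a slowly sampled channel,
    e.g. nightly heart-rate variability or an estimated serum potassium."""
    slope, _intercept = np.polyfit(days, values, deg=1)
    return float(slope)
```

None of these features carries clinical weight on its own; they exist only to feed the integration step discussed next.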
The integration problem
None of these signal classes, taken alone, would justify the cost or the patient burden of continuous monitoring as a screening tool in a healthy population. The case for monitoring is built on integration. A risk-trajectory model that combines a low frequency of self-terminating runs with a trajectory of autonomic drift and a slow electrolyte trend can generate a composite risk index whose behaviour over weeks is meaningfully predictive of the event — in a way no single channel sampled at any single time would be. The integration is what converts a population of weakly informative signals into a clinically actionable trajectory.
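A minimal illustration of that integration, with invented weights, normalisation, and threshold standing in for the validated models a real product would use, might combine daily series of per-channel trend features (for instance, a rolling run burden and rolling slopes of the autonomic and biochemical channels) into a single index and ask on which day, if any, it crosses the elective-referral threshold:

```python
import numpy as np


def composite_risk_index(run_burden: np.ndarray,
                         hrv_slope: np.ndarray,
                         potassium_slope: np.ndarray,
                         weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> np.ndarray:
    """Weighted combination of per-channel daily trend features into one
    risk value per day. Weights and normalisation are illustrative only."""
    channels = (run_burden, hrv_slope, potassium_slope)
    # Standardise each channel so the hand-picked weights act on comparable scales.
    z = [(c - c.mean()) / (c.std() + 1e-9) for c in channels]
    return sum(w * zi for w, zi in zip(weights, z))


def first_threshold_crossing(index: np.ndarray, threshold: float) -> int | None:
    """Day offset at which the composite index first exceeds the
    elective-referral threshold, or None if it never does."""
    above = np.flatnonzero(index > threshold)
    return int(above[0]) if above.size else None
```

In any real deployment the weights, normalisation, and threshold would be learned and validated on cohort data rather than chosen by hand.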
This integration is also where the proprietary content of any monitoring product lives. The signals themselves are largely public; the algorithms that combine them are not. The honest claim that this article wants to make is the framework one: the trajectory is real, the signatures are real, the integration is technically tractable, and the question of whether a given monitoring substrate captures them well enough to change clinical decisions is now an empirical question rather than a theoretical one. Different monitoring substrates answer that empirical question differently. Some answer it well enough to change the indication landscape. Some do not.
What changes when the trajectory is read in time
The downstream effect of reading the trajectory rather than only the strip is not pharmacologic. The drug list at the moment of the event is the same drug list. What changes is the clinical pathway around it. A patient whose risk-index trajectory crosses the elective-referral threshold at day -19, on a substrate that flags the crossing to the responsible clinician, becomes a candidate for procedural evaluation in a scheduled outpatient setting. The substrate is identified and characterised before the event. The procedure — an accessory-pathway ablation, a scar-substrate ablation, a device implantation — is performed under controlled conditions, with full diagnostic workup, with informed consent collected at leisure rather than between intubation attempts. The patient who would otherwise have arrived in a resuscitation bay arrives instead in a procedure suite, several weeks earlier, on the patient’s calendar rather than the rhythm’s.
The shift this produces in the population sense is not a reduction in arrhythmia incidence. The substrate exists either way. It is a redistribution of where the substrate is identified and treated — from the emergency presentation, where outcomes are stochastic and resource-intensive, to the elective referral, where outcomes are predictable and resource-efficient. This redistribution is the central clinical and health-economic claim of risk-trajectory monitoring as a category.
Figure 2 · Schematic redistribution of clinical contact across the two pathways. Stylized representation. The substrate is identical between rows; what changes is whether the trajectory generated by that substrate is recorded and read in time to redirect the patient from an unscheduled emergency presentation to a scheduled procedural pathway.
The biochemical channel — why it matters
Among the signal classes that a continuous monitoring substrate can record, the biochemical milieu deserves separate treatment because, until recently, it has not been technically accessible at the patient level outside of a clinic visit. Conventional outpatient monitoring captures rhythm and rhythm-derived signals well, autonomic signals adequately, and biochemistry essentially not at all. The trajectory of an arrhythmia substrate that is stable at normal electrolytes and unstable at a slowly drifting hypokalaemic baseline cannot be read on a rhythm-only monitor; the rhythm-only monitor records, accurately, that the rhythm is currently sinus, and misses entirely the slow-moving variable that is loading the substrate toward instability.
The technical advance that has changed this picture is the development of noninvasive transcutaneous spectroscopy — specifically, combinations of Raman and impedance methods sampled at the skin surface, calibrated against population-level chemistries — that produce continuous estimates of serum potassium, magnesium, calcium, blood pH, and circulating levels of monitored antiarrhythmic and anticoagulant drugs. The estimates are not laboratory-grade in absolute precision, and any responsible deployment of the technology has to be honest about that limitation. They are, however, sufficient for trajectory analysis: the slow-moving drift of an electrolyte over days is detected with much higher confidence than the absolute value at any instant. For trajectory-based clinical decisions, this is exactly the right precision profile. Diagnostic-grade absolute values continue to live in the laboratory; trajectory-grade slope estimates live on the patient.
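The distinction between diagnostic-grade absolute values and trajectory-grade slope estimates can be shown with a toy simulation. The drift rate, calibration bias, and noise level below are assumed for illustration rather than measured device characteristics; the point is only that a constant calibration offset shifts every sample equally and therefore drops out of a fitted slope, while per-sample noise averages down over a multi-week window.

```python
import numpy as np

rng = np.random.default_rng(0)

days = np.arange(42)                # a six-week monitoring window
true_k = 4.2 - 0.01 * days          # assumed slow potassium drift, mmol/L
calibration_bias = 0.3              # assumed constant offset of the wearable estimate
noise = rng.normal(0.0, 0.15, size=days.size)  # assumed per-sample estimation noise

measured = true_k + calibration_bias + noise

slope, intercept = np.polyfit(days, measured, deg=1)

# The absolute values are off by roughly the calibration bias...
print(f"mean absolute error: {np.mean(np.abs(measured - true_k)):.2f} mmol/L")
# ...but the fitted drift stays close to the true -0.01 mmol/L per day,
# because a constant offset does not change the slope of the fit.
print(f"estimated slope: {slope:.4f} mmol/L per day (true: -0.0100)")
```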
The clinical implication is that the integration problem now has an additional signal class to draw from. A risk-trajectory model that combines rhythm signals, autonomic drift, biochemical milieu trend, hemodynamic variation, and trans-thoracic impedance is operating with a substantially richer feature space than any monitoring approach available even five years ago. Whether the additional signal classes change clinical decisions in any individual case is, again, an empirical question. The framework claim is that the channels exist, are technically integrable, and have begun to inform clinical pathways at centres that have deployed continuous biochemical sampling at scale.
Where this changes clinical practice
The clinical practice consequences of taking trajectory monitoring seriously are not yet fully written, but the early shape of the change is visible. The most immediate effect is on the indication landscape for the procedural therapies that the events would otherwise have triggered — ablations, device implantations, anticoagulation initiations — which begin to be performed in elective settings on patients identified by trajectory rather than in emergency settings on patients identified by event. The second effect is on clinic-visit utility. A 12-lead tracing performed at a six-month interval has limited prognostic content for an arrhythmia substrate that is destabilising on a multi-week timescale; the same clinic visit, with a download of the prior six weeks of trajectory data, has substantially higher content. The visit becomes a moment of trajectory review rather than of point-snapshot acquisition.
The third effect, and the most consequential at population scale, is on the structure of arrhythmia care itself. Current health systems are organised around the event: emergency departments are the locus of arrhythmia identification, electrophysiology referrals are downstream of presentations, and population-level prevention has, in practice, been limited to risk-factor modification (blood pressure, weight, alcohol, sleep apnoea screening) on timescales that do not match the substrate timescales. A trajectory-monitored care pathway shifts the locus of identification upstream of the event, to the outpatient monitoring substrate, with corresponding effects on referral patterns, procedural scheduling, and the role of the emergency department in arrhythmia care. The shift is not all-or-nothing; events still occur in patients without monitoring and in patients whose monitoring did not catch the trajectory in time. But the population fraction of arrhythmia presentations that originate from continuously monitored patients is rising, and the clinical pathways for that fraction are diverging from the conventional emergency-first pathway in instructive ways.
The strip is the late slice of a six-week story. Reading the slice is necessary, and clinicians do it well. Reading the story is what changes whether the slice is the one in which the patient is in the resuscitation bay or the one in which the patient is consenting to an elective procedure on a Tuesday morning. The signal is not new. The substrate that records it is.
Editorial commentary — on the reframe from event to trajectory in continuous-monitoring electrophysiology
The narrow case for taking this seriously now
A reasonable reader who has reached this point in the article may grant the framework but ask what has changed in 2026 that could not have been said five years ago. Two things have changed. The first is that the biochemical channel has crossed from research-grade to bedside-deployable. Until recently, the trajectory framework was rhythm-and-autonomics-only at the patient level, and the biochemical milieu had to be sampled by clinic chemistries. The integration has become tractable at the patient level only with the maturation of transcutaneous spectroscopy. The second is that continuous-monitoring substrates have begun to produce sufficient longitudinal cohort data that the relationship between trajectory features and clinical events is being characterised quantitatively rather than only mechanistically. The framework has moved from “this should work” to “this is being shown to work, in cohorts of meaningful size, on the timescales the substrate predicts.”
None of this argues for population-level deployment of continuous monitoring as a screening tool in healthy adults. The case is narrower and more clinical: in patients whose history places them at non-trivial substrate risk — pre-excited ECG patterns, recurrent unexplained syncope, post-ablation surveillance, anti-arrhythmic dose titration with narrow therapeutic windows — the trajectory framework has specific clinical content that the strip-only framework does not, and the monitoring substrates that capture trajectory adequately are now available. The clinical question moves, in this population, from “should we monitor” to “which monitoring substrate captures the trajectory in a way that changes our decisions, and at what cost”.
What this article has tried to do is name the framework and locate the change. The framework is trajectory rather than event. The change is that the technical substrate to read the trajectory now exists at the patient level, in a form that includes the biochemical channel that was missing in earlier monitoring generations. Different products implement the framework with different completeness, different cost profiles, and different evidence bases. This article does not adjudicate among them. It argues that the framework is the right one, that it has begun to change clinical pathways at the centres deploying it well, and that practising clinicians benefit from carrying it as a way of thinking about arrhythmia presentations whether or not they have any specific monitoring product in their hands.