Proceed only if you can tolerate direct and uncompromising opinions.
Please note: “Considered Harmful” Essays Considered Harmful - I invite you to read that essay on the topic.

Redux considered harmful

Or living in a bubble

1. Introduction

For years, I have observed a consistent pattern: as applications built with Redux grow, their maintainability degrades. Steadily but surely. The larger the codebase, the more pronounced the decay. State management becomes convoluted, testing brittle, and simple changes require touching multiple files. Workarounds multiply, or entire slices of state quietly migrate outside Redux altogether.

Yet the architecture itself is rarely questioned. Developers accept Redux as standard practice - a kind of faith that “this is how state management is done”. When difficulties arise, the typical response is to add more Redux: more middleware, more elaborate patterns, more layers of abstraction. The possibility that Redux itself is the root of the complexity rarely enters the discussion. Instead, problems are attributed to insufficient mastery rather than to the tool’s architecture.

This pattern is too consistent, across too many teams and codebases, to dismiss as coincidence or developer error. This is not the fault of developers who “don’t understand Redux properly”. It suggests something deeper - a structural flaw in the framework’s design.

My conclusion, after years of working with Redux, is that the problem lies in Redux Toolkit’s architectural constraints. They appear elegant and grounded in solid principles, but beneath the surface they enforce assumptions that conflict with the realities of modern application development. The problems are not bugs to be fixed or patterns to be learned. They are the inevitable consequences of foundational design choices.

It’s hard to put a finger on the one single thing that makes Redux unpleasant to work with. Redux’s pain points emerge from a network of interdependent decisions - but the result is clear: a model that works against real-world software engineering.

This is not a matter of taste or trend. It is an engineering question: does Redux’s architecture serve the actual needs of web application development?

My answer is no. Redux served an important historical purpose: it codified early ideas about state management and taught the community valuable lessons. But those lessons now point in the opposite direction. It is time to retire Redux Toolkit from production systems and move toward architectures that respect encapsulation, composition, simplicity and predictability - fundamental software engineering principles.

2. State Management

It’s good to set some background and understand what state management actually is - not what Redux says it is, but what the problem of managing state actually looks like in React applications.

Ephemeral vs Persistent State

In practice, React applications have two broad kinds of state:

  • Ephemeral (UI) state. Tied to the component lifecycle, created via useState, vanishing when the component unmounts.
  • Persistent (application) state. Data the user expects to survive unmounting: selections, filters, form inputs, authentication.
Other state

There are other kinds of state: derived state (computed from props or selectors), remote state (owned by APIs or a database), cached state (e.g., SWR, React Query) and browser state. But those are only tangentially relevant here; we will come back to them later.

Confusing these two leads to subtle bugs. When persistence scope doesn’t align with user expectation, information disappears unexpectedly - as in a tabbed interface where expanded tree nodes collapse after tab changes. Component state is rendering state; application data must live beyond render lifecycles. Persistent doesn’t mean persisted across sessions - it is state that survives renders and lifecycles, not necessarily state written to storage. As a principle, persistence scope should match user expectation, not component lifecycle.

Rendering state works great for truly ephemeral things - hover states, loading spinners, state of the UI we are fine to lose and can easily recreate when the component renders. But it breaks down when you need state to survive unmounting. Here’s a scenario to illustrate the point:

// Team A: Building TreeView
function TreeView({ data }) {
  const [expanded, setExpanded] = useState(new Set());
  const [selected, setSelected] = useState(null);

  return <Tree nodes={data} expanded={expanded} ... />;
}

// Team B: Building TabPanel (somewhere up the tree)
function TabPanel({ activeTab }) {
  return (
    <>
      {activeTab === "overview" && <OverviewTab />}
      {activeTab === "details" && <DetailsTab />} // TreeView lives here
      {activeTab === "settings" && <SettingsTab />}
    </>
  );
}

User expands a bunch of nodes in the tree view, switches tabs, comes back. All their expansions are gone. The tree is collapsed again. They had no way to know this would happen, and they’re rightfully annoyed. Team A manages the local state of the tree. Team B uses normal React patterns. Nobody did anything wrong, but the interaction produces broken behavior.

Activity component

React 19’s <Activity> component provides a solution for this specific scenario. Instead of conditional rendering that unmounts components, Activity hides them using CSS (display: none) while preserving their internal state. When made visible again, the state is restored. This avoids the problem in the tabs example, though the broader principle remains: there are many ways components can unmount unexpectedly - routing changes, modal dismissals, or parent component logic you don’t control.
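
As a sketch, assuming the documented API - Activity imported from "react", with a mode prop of "visible" or "hidden" - the tabs example becomes:

import { Activity } from "react";

function TabPanel({ activeTab }) {
  return (
    <>
      <Activity mode={activeTab === "overview" ? "visible" : "hidden"}>
        <OverviewTab />
      </Activity>
      <Activity mode={activeTab === "details" ? "visible" : "hidden"}>
        <DetailsTab /> {/* TreeView keeps its expansion state */}
      </Activity>
      <Activity mode={activeTab === "settings" ? "visible" : "hidden"}>
        <SettingsTab />
      </Activity>
    </>
  );
}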

The issue is fundamental: component state is rendering state. It’s for ephemeral UI needs - things that make sense to lose when you can’t see the component anymore. When you try to use it for application data - stuff users expect to stick around - you’re mixing up two different things.

What users experience as “application state” might include:

  • Form data they’re entering
  • Their sort/filter preferences
  • Expansion states in trees and lists
  • Any configuration they’d describe as “I set it to X”

None of this is truly ephemeral. Using component state for it is likely using the wrong tool.

Solution: application state needs to live outside component state. We might call it external or persistent state; it might be short-lived and in memory only. The point is, it must be independent of the rendering cycle. React is a library for building user interfaces - its state is UI state, meant for rendering concerns. For application state that persists across component lifecycles, we need a different approach. React does provide Context for this, which we’ll look at shortly.

External state needs to live somewhere in memory outside components, accessible to the parts of the app that need it. It also needs to change in response to user actions and events. And components need to know when it changes so they can re-render.

And - this is super important, but more on it later - state changes trigger effects: API calls, localStorage updates, analytics events, timers, etc. Real applications are full of side effects. State management should account for effects because they’re part of real applications, not something you can ignore or bolt on as an afterthought.

External store

Here is the external state distilled down to a minimum:

import { useReducer } from "react";

let count = 0; // lives outside the component

function GlobalCounter() {
  const [, forceUpdate] = useReducer((x) => x + 1, 0);

  function handleClick() {
    count = count + 1; // changes in response to user action
    forceUpdate(); // let the component know the state has changed
  }

  return (
    <>
      <p>Count: {count}</p> {/* accessible when we need it */}
      <button onClick={handleClick}>Up</button>
    </>
  );
}

Let’s take one small step from it to have a complete, working store.

// userStore.js
let state = { loggedIn: false, user: null };
let listeners = new Set();

export function login(user) {
  state = { loggedIn: true, user };
  listeners.forEach((fn) => fn());
}

export function logout() {
  state = { loggedIn: false, user: null };
  listeners.forEach((fn) => fn());
}

export function subscribe(listener) {
  listeners.add(listener);
  return () => listeners.delete(listener);
}

export function getState() {
  return state;
}

// We replace the whole state object on each update, so getState() returns a new
// reference - useSyncExternalStore relies on snapshot identity to detect changes.
// The core idea is independence from the render cycle; immutability is an orthogonal concern here.

Now hook it into React using the built-in useSyncExternalStore hook:

import { useSyncExternalStore } from "react";
import { subscribe, getState } from "./userStore";

function useUser() {
  return useSyncExternalStore(subscribe, getState);
}

function Header() {
  const { loggedIn, user } = useUser();
  return <div>{loggedIn ? user.name : "Guest"}</div>;
}

// somewhere else, after importing login from "./userStore"
login({ name: "Ada" });

That’s it. This is state management: a global variable in a module with a subscription mechanism. Done. Twenty-ish lines of code, no dependencies. This is not a toy example or a straw man. React provides useSyncExternalStore specifically for this - to hook external state into React’s rendering system. The rest is chrome added around it. How many bells and whistles we add depends on the task at hand: we might need very little, or quite a lot.

Also notice that the state variable is private - only the exported functions provide controlled access. As obvious as it is, this is information hiding, a principle that’s worked since the 1970s. We’ll come back to this when we discuss Redux’s single global store and its complete violation of encapsulation - state for different features should be separated and private, exposing only a public API. The kind of thing that should go without saying, yet the opposite is made a central feature of Redux.

useSyncExternalStore hook

What’s useSyncExternalStore actually doing? It’s subscribing to your store and triggering a re-render when state changes. You could implement it yourself with a force-render trick: const [, forceRender] = useReducer(x => x + 1, 0), then call forceRender() in the subscription. But useSyncExternalStore is the right way to do it, so use it in production.
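
To make that concrete, a hand-rolled sketch of the same hook (illustrative only - it skips the concurrent-rendering edge cases that useSyncExternalStore handles for you):

import { useEffect, useReducer } from "react";
import { subscribe, getState } from "./userStore";

function useUserNaive() {
  const [, forceRender] = useReducer((x) => x + 1, 0);
  // Re-render on every store notification; subscribe() returns the
  // unsubscribe function, which doubles as the effect cleanup.
  useEffect(() => subscribe(forceRender), []);
  return getState();
}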

What Libraries Add

This minimal store re-renders all subscribers on any change. Libraries like Zustand or Valtio have finer-grained subscriptions to avoid unnecessary renders. But that’s an optimization, although a very important one.
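
To sketch what “finer-grained” could look like with our minimal store, here is a selector-based hook (the useUserSlice helper below is hypothetical). useSyncExternalStore compares snapshots with Object.is and skips the re-render when the selected value hasn’t changed:

import { useSyncExternalStore } from "react";
import { subscribe, getState } from "./userStore";

function useUserSlice(selector) {
  // Keep selectors returning primitives or stable references,
  // or every notification will look like a change and re-render.
  return useSyncExternalStore(subscribe, () => selector(getState()));
}

// inside a component: re-renders only when the name actually changes
const userName = useUserSlice((s) => (s.user ? s.user.name : null));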

Libraries like Zustand, Valtio, and Jotai are essentially polished versions of the pattern above, with better ergonomics:

/* Zustand */
import { create } from "zustand";

const useStore = create((set) => ({
  loggedIn: false,
  user: null,
  login: (user) => set({ loggedIn: true, user }),
  logout: () => set({ loggedIn: false, user: null }),
}));

const { loggedIn, user, logout } = useStore(); // inside a component

/* Valtio - proxy-based reactivity */
import { proxy, useSnapshot } from "valtio";

const state = proxy({ loggedIn: false, user: null });
const login = (user) => {
  state.loggedIn = true;
  state.user = user;
};

const snap = useSnapshot(state); // inside a component

/* Jotai - atomic state */
import { atom, useAtom } from "jotai";

const userAtom = atom({ loggedIn: false, user: null });

const [user, setUser] = useAtom(userAtom); // inside a component

These aren’t fundamentally different from userStore.js. They provide a better API surface, dev tools, TypeScript support, and performance optimizations. But conceptually it’s the same pattern, packaged nicely. Each library implements reactivity differently: Zustand with set-based subscriptions, Valtio via proxies, Jotai with atom dependency tracking. These differences affect performance and ergonomics but not the fundamental architecture - state in memory with a subscription mechanism.

React Context

React Context deserves special mention. It’s built into React and provides component-tree scoping.

import { createContext, useContext, useState } from "react";

const UserContext = createContext();

function UserProvider({ children }) {
  const [user, setUser] = useState(null);
  const [loggedIn, setLoggedIn] = useState(false);

  const login = (userData) => {
    setUser(userData);
    setLoggedIn(true);
  };

  const logout = () => {
    setUser(null);
    setLoggedIn(false);
  };

  return (
    <UserContext.Provider value={{ user, loggedIn, login, logout }}>
      {children}
    </UserContext.Provider>
  );
}

function useUser() {
  return useContext(UserContext);
}

It’s built-in, easy to make type-safe, requires no dependencies, and works naturally with React’s component model. For state scoped to a subtree, or even the whole application, Context is frequently the best answer.

Context should be the default instinct, the first candidate to consider for application state. Before reaching for any state management library, consider whether React Context can solve your problem. Can your feature be implemented with a top-level Context provider? Or with Context scoped to a specific component subtree? To justify pulling in an external library, you need to answer: what does this library provide that Context doesn’t? Better ergonomics? Performance optimizations for frequent updates? DevTools? Global state completely outside React? There can be valid reasons - but you should have a reason. Context handles the majority of real-world needs perfectly well. With one caveat, of course: Context is ideal for low-frequency updates and scoped configuration. For high-frequency data (e.g., cursor position), you may want a more granular store - misused Context can cause performance issues.
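
When Context is the right tool but re-renders become a concern, one common mitigation - sketched below on the UserProvider from above, assuming nothing beyond React itself - is to memoize the provider value and keep genuinely high-frequency data out of it:

import { useMemo, useState } from "react";

function UserProvider({ children }) {
  const [user, setUser] = useState(null);

  // Without useMemo, every render creates a fresh value object and
  // re-renders every consumer, even when nothing they read has changed.
  const value = useMemo(
    () => ({ user, login: setUser, logout: () => setUser(null) }),
    [user]
  );

  return <UserContext.Provider value={value}>{children}</UserContext.Provider>;
}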

How to choose

Here’s the key point: any state management solution that adds significant complexity needs to answer a specific question: What problem does your added complexity solve that simpler approaches don’t? And the benefit needs to be worth the cost.

In the case of Redux: when a solution requires 500 lines of boilerplate, a new (and, as we will see, misleading) vocabulary to learn, architectural constraints on your entire app, and reduced flexibility - it had better deliver something substantial in return.

Here are a few criteria to judge state management approaches:

  • Simplicity - Can you understand it quickly? Is there unnecessary complexity?
  • Encapsulation - Can you keep state private when needed? Prevent unintended access?
  • Effects - How naturally does it handle the reality that state changes trigger side effects? Is async coordination straightforward?
  • Types - Does TypeScript help you or fight you?
  • Performance - What’s the overhead? Can you subscribe to just parts of the state?
  • Testing - Can you test components in isolation without elaborate mocking?
  • Modularity - Can different parts of the app manage state independently? Can you lazy-load?

One more measure worth noting is locality of reasoning - how easily a developer can trace where and why a piece of state changes. A system that scatters updates across reducers, thunks, and middleware may satisfy other metrics yet fail here - and Redux often does.

Choose whatever fits the task best. But if you choose Redux Toolkit, you should be aware of the implications of its design decisions.

The historical context is important here as well - why Redux made its trade-offs, and in what environment. These trade-offs may not be as relevant today as they were a decade ago.

3. Functional Cosplay

Before we get to the substance of Redux’s design decisions, we need to look at the surface - its vocabulary.
This isn’t pedantry, it’s necessity. We think in the terms we use. When a framework misuses established technical language, it obscures what it actually does. And when it borrows terminology from other domains, it can just import prestige without importing the real concepts.

Redux presents itself as rooted in functional programming. Its documentation invokes concepts from that tradition. But the words it borrows - reducer, thunk, pure function - already have precise meanings in computer science. Redux systematically misuses them. Let’s take a closer look.

The “Reducer”

The meaning is right there in the name: reduce. In functional programming a reduce (fold) consumes a finite collection and produces a single accumulated value:

// Actual reduction - consuming a collection
[1, 2, 3, 4].reduce((sum, n) => sum + n, 0); // → 10
// Input: array of numbers
// Output: single number

This operation has clear semantics: you have a finite collection, you process each element, and you get one result. The collection is exhausted. The operation terminates. You’ve reduced many things to one thing.

Redux’s “reducer” does none of this:

// Redux "reducer" - never reduces anything
function counterReducer(state, action) {
  switch (action.type) {
    case "INCREMENT":
      return state + 1;
    case "DECREMENT":
      return state - 1;
    default:
      return state;
  }
}
// This runs forever
// Never produces a "final" value
// Never consumes a finite collection
// Just state transitions, one after another

Practically, a Redux reducer behaves like a state transition or update function. It takes the current state and an event, produces the next state, then waits for another event. And another. Indefinitely. It’s an ongoing state transition function.

What should this be called? This pattern has a name: it’s an update function or a transition function. Elm, which Redux claims as inspiration, calls it update. It’s a fair name, it’s honest. It updates state based on messages. Clear, simple, accurate.

Yes, formally you can model state as a fold over the action history - but that formal mapping carries misleading connotations for most developers.

There is a defensible formal view: “we’re reducing an infinite stream of actions over time into state”. A Redux store can be modeled as the result of folding (reducing) over an (ordered) sequence of actions: given an initial state s0 and an action stream a1, a2, …, the state at time t is fold(reducer, s0, [a1..at]). In this sense the term is justified - the reducer is the folding function. Technically, if you abstract the concept far enough, you could frame it that way. But at that level of abstraction, we could call it anything: “iterating” over actions, “computing” with actions. At some point, the term becomes so generic it conveys nothing. The whole idea of “reducing” actions into a final state seems relevant mainly to Redux’s time-travel feature. We will address time travel in a later section; suffice it to say that time travel is pretty much nonsense in real-world applications.
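
To make the formal view concrete: replaying a finite action log through the reducer with an actual reduce does reproduce the state - exactly the mechanics time travel relies on. Using the counterReducer from above:

const log = [{ type: "INCREMENT" }, { type: "INCREMENT" }, { type: "DECREMENT" }];

const stateAtT = log.reduce(counterReducer, 0); // → 1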

The purpose of a name is to convey meaning to the reader, not to sound sophisticated. When we say “sum these numbers” instead of “reduce over addition” we’re being specific and helpful. When Redux says “reducer” instead of “update function” it’s being vague and unhelpful, borrowing a term from functional programming to import legitimacy without importing clarity.

reduce aka fold

Yes, reduce does not necessarily reduce - that is, produce a smaller value. It is just a transformation that applies a function over the values in a list. It’s common for that transformation to return a “larger” value.

-- this duplicates the list
foldr (\x l -> [x, x] ++ l) [] [1..5]
[1,1,2,2,3,3,4,4,5,5]

But the point is still the same. The fold (or reduce) is the functional programming way to do a for loop. It’s much more useful to name the operation the loop performs - taking a sum, finding the maximum, duplicating the elements - than to call everything “the for loop” and let others infer its purpose. Updating the state is the single purpose of this function. Just call it that!

React’s useReducer hook inherits this confused terminology from Redux. It’s equally misnamed - it’s a state updater that happens to take current state and an action. The “reducer” label adds no clarity.

The “Thunk”

The term “thunk” comes from ALGOL-60, where it described delayed computation - wrapping an expression in a function so it can be evaluated later:

// Actual thunk - delayed evaluation
const expensiveThunk = () => computeExpensiveValue();
// Call it when you need the value:
const result = expensiveThunk();

The thunk delays when the computation happens. That’s the whole concept: lazy evaluation, deferred execution, passing around unevaluated computation.

Redux’s “thunk” is nothing like this:

// Redux "thunk" - not a thunk at all
const fetchUser = (userId) => async (dispatch, getState) => {
  const response = await api.getUser(userId);
  dispatch(setUser(response));
};

This is just a higher-order function that returns an async function with closure over dispatch and getState. There’s no delayed evaluation, no lazy computation, no thunking happening here. Redux’s thunk is an action shape that middleware recognizes and executes - one that does some effectful work and updates the state via dispatch. Yes, the dispatch is “delayed”, but that has nothing to do with laziness in the programming-languages sense, where the term originates. In Redux it’s just a way of expressing imperative logic and side effects.
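
The entire mechanism is tiny. Roughly - as a simplified sketch, not the actual source - this is what the redux-thunk middleware does:

const thunkMiddleware =
  ({ dispatch, getState }) =>
  (next) =>
  (action) =>
    typeof action === "function"
      ? action(dispatch, getState) // run the "thunk" right away
      : next(action); // pass plain action objects down the chain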

The term “thunk” adds nothing but confusion. Developers familiar with actual thunks wonder what laziness is being achieved. Newcomers learn the wrong definition. Either way, clarity suffers.

What should this be called? It’s an async action creator. Or more plainly: application logic that updates state. That’s what it is. That’s what it does. The “thunk” terminology obscures something simple. I cringe every time I have to pronounce that word in the context of Redux.

The Pattern of Abuse

These aren’t isolated mistakes. They show a consistent pattern:

Reducer - Borrowed from functional programming (fold/reduce). Actual reducers consume finite collections and produce final values. Redux’s version does neither; it is just an ongoing state transition.

Thunk - Borrowed from programming language theory (lazy evaluation). Actual thunks delay computation. Redux’s version executes immediately.

Pure functions - Redux enforces purity in reducers as a first-class constraint; handling real-world effects requires separate layers (middleware), which introduces indirection and cognitive overhead. We will look at it close in the next section.

Actions - A reasonable term, though “events” or “messages” would be clearer.

In each case, Redux imports terminology from domains with theoretical rigor, but doesn’t import the concepts themselves. The result is borrowed prestige masquerading as understanding.

Why Terminology Matters

Imprecise terminology isn’t just annoying - it prevents clear reasoning about systems. When you call something a “reducer”, you communicate expectations about what reducers do and how they behave. Then when the reality doesn’t match, you’re forced to learn Redux-specific meanings that conflict with the established ones.

This cognitive overhead serves no purpose. It doesn’t make the code more expressive or the abstractions clearer. It makes Redux appear more theoretically grounded than it is.

The purpose of names is to convey meaning to the reader, not to sound sophisticated. Haskell can afford “smart” names; it has a different target audience - functional programming language researchers. Web developers need clarity.

Clearing away the terminological fog helps reveal deeper problems. Why does Redux need “thunks” at all? Why does it need middleware to intercept functions? Why is there such elaborate ceremony around what should be straightforward application logic?

The answer isn’t in the terminology - it’s in the architecture underneath. Redux has made some fundamental choices about what state management is, and these choices generate all these complications. The misused terms are symptoms. The actual problem is architectural.

Redux treats state management as pure transformation (state, action) -> newState.

That assumption - seemingly reasonable - drives the rest of Redux’s design: the middleware, the async workarounds, the ceremonies. In the next section we’ll examine this model of state as pure transformation, and why it collapses under real-world side effects.

4. State as Pure Transformation

Redux models state management as pure state transformation. It’s an elegant idea - and a limited one. It works in theory, and in small toy examples. But in real applications, the abstraction breaks down. As anyone who has built even moderately complex systems in a purely functional style knows, pure transformation alone is not enough - and that has been known since the dawn of computing.

The Pure Model

Redux’s conceptual foundation is elegant:

(state, action) => newState;

A function that takes current state and an action, returns next state. Pure. Deterministic. Testable. No side effects, no hidden dependencies, no surprises. Given the same inputs, you always get the same output.

This design borrows directly from functional programming. Pure functions are easier to reason about, test, and compose. So Redux rests on top of the idea of pure state transformation.

But there’s a problem: state management in web applications is not just about transforming data. (Or more generically, any useful program is not just about transforming data). Real programs do more than compute values - they interact with the world.

The Reality of Effects

Real applications don’t just transform state. They cause and respond to effects:

// User clicks "Login"
// We need to:
// 1. Update UI state (loading spinner)
// 2. Make API call (HTTP request)
// 3. Store token (localStorage)
// 4. Track analytics (external service)
// 5. Start refresh timer (setTimeout)
// 6. Update application state (user data)

// Redux's model: (state, action) => newState
// Reality: state and effects are intertwined

State changes trigger effects, and effects trigger more state changes. They’re not separate concerns that can be cleanly divided-they’re fundamentally intertwined in application logic.

Functional programming recognizes this tension. Haskell - a purely functional language - dedicates an entire part of its type system to modeling effects: the IO monad, algebraic effects, and so on. Simon Peyton Jones once put it succinctly: “Systems without side effects are safe but useless; systems with side effects are useful but unsafe.” The challenge is managing that trade-off, not pretending it doesn’t exist. Peyton Jones and Erik Meijer have a wonderful discussion about this - a funny video worth searching for: “Simon Peyton Jones - Haskell is useless”.

Redux chose the “safe but useless” path first: enforce purity in reducers and leave effects to something else. When the model proved too limited for real use, middleware was created as a separate package. Only later - years afterward - did Redux Toolkit bundle middleware by default, effectively admitting that no real application could function without it. Redux Toolkit doesn’t ignore effects; it externalizes them. They exist outside the core abstraction, plugged in later to fill the gap. That makes effects an architectural afterthought - they were never part of the model’s foundation.

This is the same pattern we see in other frameworks when theory meets production: an elegant abstraction (purity) requires pragmatic escape hatches (middleware, thunks, sagas) to survive contact with reality.

Redux claims to be rooted in functional programming. The irony is that you can’t fully apply functional programming principles without a clear way to handle effects - it’s a major topic in the functional programming world. Redux treats effects as plugins to be bolted on.

Another bedrock of functional programming is functions as values. Redux fails here as well: it forbids them by requiring serializable state, then bolts them back on through middleware as “thunks”.

While technically optional, the official Style Guide labels serialization as “Essential” and instructs developers to “abide by it at all costs.”

Serializable state deserves a separate discussion, which we will come back to.

The “Thunk” Ceremony

Redux eventually needed an escape hatch for effectful work, and something called a “thunk” became that mechanism: a function that has access to dispatch and getState, does whatever effectful work it needs to do, then dispatches actions to update state.

// What a "thunk" actually is
async function doSomethingAndUpdateState(dispatch, getState) {
  const data = await api.fetch(); // Effect
  dispatch(setData(data)); // State update
}

It’s just application logic: do something, update state.

But Redux can’t just say “here’s a function that does effects and updates state.” That would be admitting that the pure model is insufficient. So instead, it creates ceremony:

  1. Call it a “thunk” (sounds theoretical and meaningful)
  2. Make it go through the dispatch mechanism (maintains the illusion of “dispatching actions”)
  3. Require middleware to intercept it (adds architectural complexity)
  4. Treat it as a special case (instead of acknowledging it’s the normal case)

The result: simple application logic gets wrapped in layers of abstraction. A function becomes a “thunk.” Calling it requires “dispatching” it. The framework needs middleware to “handle” it. All to avoid admitting that the pure (state, action) -> state model was never sufficient for real applications.

What is this really? Application logic with side effects. Every program has it. Redux simply adds enough ceremony to make it look exceptional.

The AsyncThunk Ceremony

The pure transformation model is awkward for synchronous effects. For asynchronous work, the ceremony grows even heavier.

What does simple async application logic look like?

// What you want to write
async function fetchUser(userId) {
  const user = await api.getUser(userId);
  return user;
}

This is application logic: call an API, get data back, use it. But Redux can’t just let you write this. That would be admitting that async operations and state updates should be unified. So in Redux Toolkit, you write this instead:

const fetchUser = createAsyncThunk("user/fetch", async (userId) => {
  const user = await api.getUser(userId);
  return user;
});

// ...and in the slice, inside the createSlice({ ... }) options:
extraReducers: (builder) => {
  builder
    .addCase(fetchUser.pending, (state) => {
      state.loading = true;
    })
    .addCase(fetchUser.fulfilled, (state, action) => {
      state.user = action.payload;
      state.loading = false;
    })
    .addCase(fetchUser.rejected, (state, action) => {
      state.error = action.error;
      state.loading = false;
    });
};

To perform a simple async operation, you now need:

  1. Wrap your async function in special createAsyncThunk
  2. Give it a string identifier (manual namespace management?)
  3. Write state update functions for three separate action types (pending, fulfilled, rejected)
  4. Update state in three different places (spreading logic across the file)
  5. All to do what every async operation does: start, succeed or fail

And if you just want to perform a side effect without updating state, but still have access to it during the operation - say, log an analytics event - you’re still forced through the same ceremony.
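
A sketch of that situation, with hypothetical event and slice names - a fire-and-forget analytics call that updates no state, still dressed up as an async thunk:

const trackCheckoutViewed = createAsyncThunk(
  "analytics/checkoutViewed",
  async (_, { getState }) => {
    // No dispatch, no state change - just a side effect that needs getState.
    analytics.track("checkout_viewed", {
      items: getState().checkout.items.length,
    });
  }
);

// dispatch(trackCheckoutViewed());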

This isn’t architecture. This is fighting the framework to do what JavaScript handles naturally.

The unwrap()

AsyncThunk doesn’t return your data directly. It wraps it in an action object:

// What you want:
const user = await fetchUser(123);
console.log(user.name);

// What Redux gives you:
const resultAction = await dispatch(fetchUser(123));
console.log(resultAction);
// {
//   type: 'user/fetch/fulfilled',
//   payload: { id: 123, name: 'John' },
//   meta: { requestId: 'abc', arg: 123 }
// }

// Must unwrap to get the data:
const user = await dispatch(fetchUser(123)).unwrap();

Why? So you can access metadata (request ID, original arguments, action type). But 99% of the time, you just want the data. The common case requires an extra method call. The rare case (accessing metadata) could have been the special case.

Error handling becomes awkward too. Consult the docs for the full details and try to make sense when to unwrap() and when not to and why it matters.
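
The gist, per the documented behavior: without unwrap(), the promise returned by dispatch resolves even when the request failed, and you must inspect the resulting action; with unwrap(), failures become ordinary rejections.

// Without unwrap(): always resolves; check the action type to detect failure.
const action = await dispatch(fetchUser(123));
if (fetchUser.rejected.match(action)) {
  console.error(action.error);
}

// With unwrap(): failures reject, so plain try/catch works.
try {
  const user = await dispatch(fetchUser(123)).unwrap();
} catch (err) {
  console.error(err);
}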

The Extra Argument Problem

Redux Toolkit recommends using extraArgument for dependency injection:

const store = configureStore({
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware({
      thunk: {
        extraArgument: {
          authApi: new AuthAPI(),
          stripeApi: new StripeAPI(),
          dbConnection: new Database(),
          logger: new Logger(),
        },
      },
    }),
});

Now any thunk can access these:

const someThunk = createAsyncThunk("feature/action", async (_, { extra }) => {
  await extra.authApi.getToken();
  // Use the API
});

The pattern creates a global dependencies object. Every thunk can access every API in extra. The payment slice can access auth’s private APIs. The UI slice can access database connections. There are no boundaries - convenient, but the opposite of encapsulation.

The Evolution of Failure

Redux’s handling of async has evolved through multiple attempts, each an admission that the previous approach didn’t work well:

Redux alone - Can’t handle promises at all. Reducers must be synchronous.

Redux-thunk - Dispatch functions instead of actions. Works, but verbose:

const fetchUser = (id) => async (dispatch) => {
  dispatch({ type: "USER_LOADING" });
  try {
    const user = await api.getUser(id);
    dispatch({ type: "USER_SUCCESS", payload: user });
  } catch (error) {
    dispatch({ type: "USER_ERROR", error });
  }
};

Redux-saga - An attempt to use generator functions for async. Added enormous complexity with domain-specific syntax (yield call, yield put) and is now effectively a dead project. A failed solution to a problem Redux created.

Redux Toolkit’s createAsyncThunk - Generates loading/success/failure actions automatically. Better than manual thunks, but you still need verbose extraReducers boilerplate, and adds unwrap() for normal cases.

RTK Query - Gives up on the Redux model entirely for async state. RTK Query is essentially React Query built on top of Redux Toolkit - a completely different API that effectively admits the core model can’t handle async well.

Each phase adds complexity to work around the same root problem: the pure transformation model doesn’t fit asynchronous reality.

Libraries designed with async as a first-class concern don’t have these hacks.

The Sane Way

Async is natural in JavaScript:

async function fetchUser(userId) {
  const user = await api.getUser(userId);
  return user;
}

const user = await fetchUser(123);
// Direct, simple, obvious

This is what developers expect. Async operations are just functions that return promises. No ceremony, no special wrapping, no extra concepts.

SWR

import useSWR from "swr";

const { data: user, error } = useSWR(`/api/users/${userId}`, fetcher);

Zustand

const useStore = create((set) => ({
  user: null,
  fetchUser: async (id) => {
    const user = await api.getUser(id);
    set({ user });
    return user; // Revolutionary! Just return what you want!
  },
}));

// Use it:
const user = await useStore.getState().fetchUser(123);

No dispatch. No actions. No reducers. No thunks. No unwrap. Just: here’s how to fetch the data, here’s the result. State and async operations are unified from the start.

Why This Model Fails

Redux’s model (state, action) -> newState is too narrow. State management in real applications requires unified handling of:

  • State (the data)
  • Behavior (the operations that modify it)
  • Lifecycle (mount/unmount, initialization, cleanup)
  • Effects (async calls, subscriptions, timers, I/O)

State without effects is useless. Effects without state coordination are chaos. They must coexist in one coherent model.

Redux’s entire architecture - thunks, middleware, sagas, RTK Query - exists to work around this fundamental mismatch between the model and reality. Each layer of complexity is an attempt to bolt effects onto a model that pretends they don’t exist.

Web applications aren’t pure transformations. They’re living systems full of side effects. Any architecture that treats effects as an afterthought will struggle, and Redux struggles systematically.

Next, we’ll see how Redux’s single store constraint makes everything even worse, making encapsulation and modularity nearly impossible.

5. The Single Store Constraint

The Redux Style Guide is explicit: “A standard Redux application should only have a single Redux store instance, which will be used by the whole application.” This is marked as an Essential rule - a guideline the documentation says to “abide by at all costs.”

Why insist on one store? The typical justifications are:

  1. Performance - multiple stores communicating would create performance problems.
  2. Simplicity - it’s simpler when different parts of state can access each other.
  3. DevTools / Time Travel - debugging and time travel require a single snapshot of all state.
  4. Single Source of Truth - keep application state centralized.

At first glance these sound reasonable. But each justification fails under closer inspection. More importantly, the single-store constraint violates a bedrock software-engineering principle: encapsulation.

Weak justifications

Performance. The claim that multiple stores imply poor performance is too simplistic. A single global store can create a scenario where any change looks like it might affect the whole app, leading to coarse subscriptions and unnecessary re-renders. Multiple small stores, when implemented with fine-grained subscriptions, often avoid this problem because updates are localized. Modern libraries (Zustand, Valtio, Jotai) are explicit about subscription granularity, support multiple stores, and avoid the global re-render problem by design. That said, cross-store communication must be designed carefully - naïvely implemented cross-store listeners can be costly. The answer is to localize updates and tune subscription granularity, not to force a global bucket of state.

Simplicity. The “simplicity” of one store is actually global coupling. When any feature can read or write any other feature’s internals, you lose modularity. True simplicity comes from well-defined interfaces, not universal access. Everything accessible to everything isn’t simple, it’s chaotic.

DevTools and Time Travel. Centralized state makes it trivial to snapshot and time-travel Redux-managed data, which is probably another reason for the single-store recommendation. But that benefit is partial: most real applications have significant state outside Redux - browser history, server state, WebSocket connections, timers, localStorage, third-party SDKs - none of which is captured by Redux time travel. The result can be misleading snapshots and impossible-to-replay application states. Time travel is not a sufficient reason to centralize everything. We will come back to this point in the next section.

Single Source of Truth. In practice your application’s “truth” is distributed: server, URL, browser storage, the DOM, external services, and whatever is in memory. Redux state is one source among many. Treating it as the single source encourages poor design decisions and creates a false sense of control.

Why Encapsulation Matters

Encapsulation is not a buzzword - it’s a proven technique for managing complexity. David Parnas wrote the canonical argument in “On the Criteria to Be Used in Decomposing Systems into Modules”: modules should hide implementation details and expose a controlled interface. The payoff is enormous: independent reasoning, safer refactoring, and predictable evolution.

As systems grow, the number of potential interactions between parts grows exponentially. Without boundaries, every part can depend on every other part. Change becomes impossible because everything is coupled to everything else.

Every mature engineering domain enforces boundaries: process isolation in operating systems, schema and access-control boundaries in databases, module visibility in programming languages, and API boundaries in distributed systems. The same reasoning applies to application state: private implementation details should not be globally visible.

When you collapse all state into one global object, you remove those boundaries. Everything becomes observable and therefore fragile. Encapsulation is not preference, it is necessity. Without encapsulation, systems become unmaintainable as they scale.

Rant

Let’s step away from the pop-culture attitude (as Alan Kay warned) and pay attention to the basics of engineering. Let’s not end up going whichever way the wind blows - wherever fashion, personal charisma and false authority take us - leaving reason to bite the dust.

Redux’s Single Store Crimes

Redux puts all application state in one global object:

const store = configureStore({
  reducer: {
    auth: authReducer,
    router: routerReducer,
    checkout: checkoutReducer,
    admin: adminReducer,
    analytics: analyticsReducer,
    // ... every feature's state
  },
});

That creates a global data bag. Consider a startup flow that needs private runtime data:

// What you want: Private implementation
class StartupFlow {
  #state = "initial";
  #attemptCount = 0;
  #sessionToken = null;

  async start() {
    this.#state = "authenticating";
    this.#attemptCount++;
    const token = await authenticate();
    this.#sessionToken = token;
    // Complex flow with private state
  }

  // Only expose what's needed
  get isReady() {
    return this.#state === "ready" && this.#sessionToken !== null;
  }
}
private fields

If you’ve been living in TypeScript land: the # prefix in JavaScript denotes truly private fields - inaccessible from outside the class at runtime, unlike TypeScript’s compile-time-only private.

This is properly encapsulated: private fields and a small public interface.

Redux, by contrast, encourages putting these implementation details into state, making them globally observable and mutable:

// What Redux forces: Everything public
const startupSlice = createSlice({
  name: "startup",
  initialState: {
    state: "initial",
    attemptCount: 0,
    sessionToken: null,
    tempData: {},
  },
  // These are your "private" implementation details
  // But they're in the global store
});

// Now ANYWHERE in the codebase:
const state = store.getState();
console.log(state.startup.sessionToken); // "Private" data accessed
console.log(state.startup.attemptCount); // Implementation detail exposed

You can document it as private. You can name it with underscores. You can write comments. None of that enforces boundaries. It’s in the global store. Any code with access to store.getState() (which is all Redux code) can read it. Any code with access to dispatch can modify it.

There are no boundaries. Everything is public.

Slices: A Band-Aid

Slices are only organizational. Redux Toolkit’s “slices” (feature-namespaced reducers) improve organization and developer ergonomics, but they are not strong encapsulation. Namespacing is a convention; the underlying store is still global and fully accessible. Slices make code easier to find, not harder to misuse.

// Separate files, organized by feature
const authSlice = createSlice({ name: "auth" /* ... */ });
const checkoutSlice = createSlice({ name: "checkout" /* ... */ });
const adminSlice = createSlice({ name: "admin" /* ... */ });

// Combined into single store
const store = configureStore({
  reducer: {
    auth: authSlice.reducer,
    checkout: checkoutSlice.reducer,
    admin: adminSlice.reducer,
  },
});

This looks like modularity. Each slice is in its own file. But it’s pure convention. The entire store remains globally accessible, there is no privacy.

// In checkout feature code:
function CheckoutButton() {
  const dispatch = useDispatch();
  const state = useSelector((state) => state);

  // Can access ANY slice's data
  console.log(state.auth.sessionToken); // Different slice
  console.log(state.admin.permissions); // Different slice

  // Can dispatch to ANY slice
  dispatch(adminSlice.actions.grantPermission()); // Wrong domain!
}

This is Redux recognizing the problem but unable to solve it without abandoning the single store constraint. So they add slices, a way to organize code that pretends there are boundaries while maintaining the architectural decision that prevents actual boundaries.

Once again it’s solving a problem Redux itself created.

Hyrum’s Law

Hyrum’s Law postulates: “With a sufficient number of users of an API, it doesn’t matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.” All observable behavior. :-)

Redux makes everything observable. Every field in every slice is accessible via store.getState(). And in a large codebase with multiple teams, someone will depend on it.

// Team A: Builds checkout flow
const checkoutSlice = createSlice({
  name: "checkout",
  initialState: {
    items: [],
    _validationCache: {}, // Internal optimization
    _tempTotals: {}, // Intermediate calculation
  },
});

// Team B: Building marketing banner, under deadline pressure
function MarketingBanner() {
  // "Hey, checkout has cart item count!"
  const { _validationCache } = useSelector((state) => state.checkout);
  const hasItems = Object.keys(_validationCache).length > 0;

  return hasItems ? <Banner /> : null;
}

// Now Team A can't change _validationCache without breaking marketing
// Their "private" implementation detail is part of another team's code

The underscore prefix is a convention, not enforcement. You’ve documented it as internal. You’ve asked teams not to use it. But it’s there. It’s accessible. And under deadline pressure, someone uses it.

Now you can’t refactor. Your internal implementation has become an external API that other code depends on.

The Dependency Hub

RTK’s extraArgument pattern for dependency injection illustrates the hub effect:

// Creating the store requires knowing about EVERYTHING
const store = configureStore({
  reducer: {
    auth: authReducer,
    checkout: checkoutReducer,
    admin: adminReducer,
    // ... all reducers
  },
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware({
      thunk: {
        // all external dependencies
        extraArgument: { authApi, stripeApi, db, analytics, ws, storage },
      },
    }),
});

Now the store configuration must know about almost every service. Each thunk can access the entire extra object, effectively making the store a dependency hub:

Everything -> Store -> Everything

The store is a hub that couples the entire application together. As a consequence, the store bootstrap must instantiate many services, hindering lazy loading and code splitting. Tests must replicate or mock a large dependency surface. Type boundaries blur - TypeScript typings for extraArgument leak across unrelated features.

// Single store makes code splitting nearly intractable:
// Store configuration imports everything
import authReducer from "./features/auth"; // + AuthAPI + configs
import adminReducer from "./features/admin"; // + AdminAPI + rbac lib
import analyticsReducer from "./features/analytics"; // + analytics SDK
import editorReducer from "./features/editor"; // + editor lib (1MB)

// Result: 2MB+ in initial bundle
// Regular users load admin features they can't access
// Mobile users load desktop-only features
// Free tier users load pro features

Dynamic reducer injection can mitigate some bundling issues but adds its own complexity (DevTools inconsistencies, lifecycle management, and type awkwardness). In practice, this pattern trades one set of problems for another.
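
For reference, the usual shape of that mitigation - a sketch built on Redux’s store.replaceReducer, with staticReducers assumed to hold the eagerly loaded slice reducers:

import { combineReducers } from "redux";

const asyncReducers = {};

export function injectReducer(store, key, reducer) {
  asyncReducers[key] = reducer;
  // Rebuild the root reducer and swap it in at runtime.
  store.replaceReducer(combineReducers({ ...staticReducers, ...asyncReducers }));
}

// A lazy-loaded feature module would call:
// injectReducer(store, "editor", editorReducer);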

The Testing Cost

A single global store makes unit tests and refactoring heavier. To test a single feature, you often must construct and mock a lot of surrounding plumbing. With properly encapsulated feature stores, you can test modules in isolation and mock far fewer dependencies.
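
For contrast, a sketch of a unit test against an isolated store like the useAuthStore defined in the next section (assuming a Jest/Vitest-style runner and a faked AuthAPI):

test("login exposes only the public surface", async () => {
  // No <Provider>, no root reducer, no middleware to assemble.
  await useAuthStore.getState().login({ user: "ada", password: "pw" });

  const { isAuthenticated, user } = useAuthStore.getState();
  expect(isAuthenticated).toBe(true);
  expect(user).not.toBeNull();
});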

The Sane Way

A better pattern is to apply encapsulation and bounded contexts: each feature owns its state, dependencies, and public API. Example (Zustand-style):

// Auth store - isolated
const useAuthStore = create((set) => {
  // Private dependencies (closure scope)
  const authApi = new AuthAPI();

  // Private state (closure scope)
  let sessionToken = null;
  let attempts = 0;

  return {
    // Public API only
    isAuthenticated: false,
    user: null,

    login: async (credentials) => {
      attempts++;
      sessionToken = await authApi.authenticate(credentials);
      set({ isAuthenticated: true, user: decodeToken(sessionToken) });
    },

    logout: () => {
      sessionToken = null;
      attempts = 0;
      set({ isAuthenticated: false, user: null });
    },
  };
});

// Other code CANNOT access:
// - sessionToken (private state)
// - attempts (private state)
// - authApi (private dependency)
//
// They only see: isAuthenticated, user, login, logout

Multiple isolated stores with clear boundaries:

// Core domain stores - each brings its own dependencies
const useAuthStore = createAuthStore(); // + AuthAPI
const useUserStore = createUserStore(); // + UserAPI

// Feature stores - each brings its own dependencies
const useCheckoutStore = createCheckoutStore(); // + StripeAPI
const useAdminStore = createAdminStore(); // + AdminAPI + RBAC

// UI stores
const useModalStore = createModalStore();
const useToastStore = createToastStore();

// Each store:
// - Manages its own state privately
// - Brings its own dependencies
// - Exposes only what should be public
// - Can be tested independently
// - Can be lazy loaded with its dependencies
// - Can't accidentally access other stores' internals

With isolated stores, the bundler can split naturally because there’s no central hub coupling everything together. Features bring their own state management and dependencies. Lazy loading just works. Tests are smaller and more focused. Accidental coupling drops, and refactorability improves.

This is not novel - it mirrors module boundaries, microservices, and OS process isolation. Applying it to client-side state yields similar maintainability benefits.

Single Store Can Be Pragmatic

It’s fair to acknowledge that a single Redux store can make sense in limited circumstances.

For small applications or those with short lifespans, global coupling may never become a problem.
The simplicity of one store and one dispatch loop can be convenient when the system’s complexity doesn’t justify modularization.
Similarly, some domains are inherently global - dashboards with shared filters, synchronized real-time views, or tightly integrated editor-style UIs where components must constantly coordinate.
In these rare cases, the trade-off may be acceptable: you gain straightforward data flow and global introspection at the cost of isolation.

Tooling can also tilt the balance.
If you use Redux DevTools for time-travel debugging, action tracing, or state inspection, centralizing state can simplify debugging more than it complicates architecture.
For small teams or internal tools, that might be a pragmatic compromise.

But even in those scenarios, Redux is not uniquely capable.
Modern state libraries like Zustand, Valtio, Jotai and others can provide the same single-store simplicity - global state shared across components - without Redux boilerplate, action ceremony, or architectural rigidity.
They accomplish what Redux Toolkit attempts, but with clearer boundaries, far less code, and no dependency on middleware conventions.
If a global store genuinely fits your problem domain, these lighter alternatives deserve serious consideration before reaching for Redux.

The single-store constraint is reasonable when complexity is low, but it becomes a liability as soon as multiple features or teams need to evolve independently.
Unfortunately, Redux presents it as a universal best practice - something to “abide by at all costs” - rather than what it truly is: an optimization for simple systems that doesn’t scale.

A Shift in Redux Philosophy

Interestingly, Redux itself has evolved away from the ideas that once justified its single-store architecture.

In classic Redux, actions were global. They represented system-wide events that any reducer could observe. A single USER_LOGGED_OUT action might cause several reducers - authentication, settings, notifications - to react independently. This pattern matched the “single source of truth” concept: one global event stream, one global state tree.

Redux Toolkit changed that model by generating namespaced action creators inside slices (e.g. auth/logout, checkout/addItem). That makes actions appear local to a slice, but it doesn’t change the fundamental runtime: actions are still broadcast through the global dispatch pipeline, and every reducer can observe them. The difference is mainly organizational and ergonomic, but it abandons one of Redux’s original distinctive ideas - globally observable actions flowing through a single reducer pipeline. A half-step towards modularity and multiple stores.
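
The broadcast is easy to demonstrate: any slice can still react to another slice’s namespaced action via extraReducers (a sketch, assuming the authSlice from earlier defines a logout reducer):

const notificationsSlice = createSlice({
  name: "notifications",
  initialState: [],
  reducers: {},
  extraReducers: (builder) => {
    // "auth/logout" is dispatched by a different slice, yet observed here.
    builder.addCase(authSlice.actions.logout, () => []);
  },
});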

If actions are already local to slices, the rationale for a single global store largely disappears.

Summary

The single-store rule trades modularity and encapsulation for global observability. In practice, those tradeoffs produce:

  • Coupling via observability - private details become relied-on APIs (Hyrum’s Law)
  • Refactoring friction - internal changes break remote consumers
  • Dependency hub - central instantiation and coupling of services
  • Bundle and loading complications - harder code splitting and lazy loading
  • Testing bloat - tests require reconstructing broad parts of the app
  • Maintenance becomes intractable - Changes in one part ripple through the coupled system

Encapsulation is a fundamental tool for managing complexity, and Redux’s single-store constraint makes applying it difficult.
While the constraint can be pragmatic in small or highly coupled systems, those are the exceptions - not the rule.
And even then, modern state libraries achieve the same outcomes with simpler architecture and less ceremony.

Next we’ll look at serialization and time-travel in more detail.

6. The Time-Travel Illusion

Redux DevTools’ time-travel debugging is often cited as one of its defining features.
The demo is compelling: step backward and forward through dispatched actions, watch state rewind and replay, explore how each action transformed the store.
It looks powerful. It looks like the future of debugging.

There’s one problem: in any non-trivial application, time travel is impossible.

This isn’t a configuration issue, a limitation of the DevTools, or a case of “you’re not using Redux correctly”. It’s a conceptual impossibility that follows from how web applications actually work. Understanding why time travel fails also reveals why Redux’s “serializable state” rule - the constraint designed to make time travel possible - ends up being pure overhead.

Living in a Bubble

Redux’s mental model is simple:

Application State = Redux Store

Everything that matters lives in the store. Change the store, and you’ve changed the application. Rewind the store, and you’ve rewound the application. This model makes time travel sound plausible: just replay actions to recreate any historical state.

But this model is fiction.

Real Application State

Your application isn’t just Redux state. In a real web application, Redux state is only a fragment of total system state:

Application State =
  Redux Store +                       // What Redux controls
  Server State +                      // Actual backend state
  URL + Browser History +             // Navigation state
  localStorage + sessionStorage +     // Persistent browser data
  IndexedDB +                         // Client-side databases
  Cookies +                           // Cookie jar contents (yours and third-party)
  Service Worker State +              // Background workers
  WebSocket Connections + Queues +    // Live connections and pending messages
  WebRTC Media State +                // Peer streams and media
  Canvas/WebGL State +                // Graphics contexts
  Audio Context State +               // Web Audio API graphs
  Timer State +                       // setTimeout / setInterval callbacks
  Animation State +                   // requestAnimationFrame, CSS transitions
  Scroll + Focus + Selection State +  // Viewport and focus positions
  Third-Party SDKs + iframes +        // Embedded service state
  In-Flight Network Requests +        // Active XHR/fetch calls
  File System / Notification APIs +   // OS integrations
  Any other stateful library +        // Calendars, trees, visualization, etc.
  Clipboard, Geolocation, Network, Battery, etc. + // Browser APIs
  Time Itself                         // Date.now(), clocks

Redux controls the first line. Everything else marches forward independently. When you “time travel” in Redux DevTools, you’re rewinding perhaps 5% of your application’s actual state while the other 95% continues to move forward in real time.

An Example

Let’s walk through what happens when time travel encounters reality. The example is contrived, of course, but it illustrates the point.

function CheckoutFlow() {
  const dispatch = useDispatch();
  const { cartItems, status } = useSelector((state) => state.checkout);

  const handleCheckout = async () => {
    dispatch(setStatus("processing"));

    // External state change #1: Payment service
    const paymentId = await stripe.createPaymentIntent({
      amount: calculateTotal(cartItems),
      currency: "usd",
    });

    // External state change #2: Server database
    await api.reserveInventory(cartItems);

    // External state change #3: Browser storage
    localStorage.setItem("pendingPayment", paymentId);

    // External state change #4: Analytics service
    analytics.track("checkout_initiated", { items: cartItems.length });

    // External state change #5: Timer scheduled
    const timeoutId = setTimeout(() => {
      dispatch(setStatus("timeout"));
      api.cancelReservation();
    }, 300000);

    dispatch(clearCart());
    dispatch(setStatus("complete"));
  };

  return <button onClick={handleCheckout}>Complete Purchase</button>;
}

Now you open Redux DevTools and “time travel” back to before the checkout button was clicked.

Redux state: ✅ Reverted to { cart: [...items], status: 'pending' }
Everything else: ❌ Still in the present

  • Stripe still holds a payment intent
  • The backend still has reserved inventory
  • localStorage still has pendingPayment
  • Analytics already recorded an event
  • The timeout is still scheduled; it will fire and cancel a “non-existent” reservation
  • The user’s bank may have already processed the charge

Your app now represents an impossible world. Redux says checkout hasn’t started, your services say it already completed. Clicking “Checkout” again might double-charge the card or double-reserve inventory. This isn’t a programming mistake - it’s the unavoidable outcome of trying to rewind a distributed system locally.

The Timer Paradox

Timers deserve special attention because they’re ubiquitous and time travel breaks them completely:

function NotificationManager() {
  const dispatch = useDispatch();

  const scheduleNotification = (message, delay) => {
    dispatch(addPending(message));

    setTimeout(() => {
      dispatch(showNotification(message));
      dispatch(removePending(message));
    }, delay);
  };

  return (
    <button onClick={() => scheduleNotification("Hello", 10000)}>
      Schedule Notification
    </button>
  );
}

Click the button to schedule a notification in 10 seconds. Then rewind state. Redux clears the pending entry - but in reality the timer is still running.

When the timer fires, it dispatches actions from a “future” that for Redux doesn’t exist. The system violates causality - a clear demonstration that Redux’s time axis is disconnected from real time.

This isn’t an edge case. Any logic involving delayed or periodic work - retries, session expiry, cache TTLs, animations, polling - breaks time travel in exactly this way. The scheduled work executes based on real time, not Redux’s rewound time.

The Router State Problem

Single-page application routing is another source of state that Redux doesn’t control:

function CheckoutButton() {
  const navigate = useNavigate();
  const dispatch = useDispatch();

  const handleCheckout = async () => {
    dispatch(startCheckout()); // Redux state change
    navigate("/checkout/payment"); // Router state change

    const result = await processPayment();

    if (result.success) {
      dispatch(completeCheckout()); // Redux state change
      navigate("/checkout/success"); // Router state change
    }
  };
}

After a successful checkout, Redux says “complete” and the router shows /checkout/success. Rewind Redux, and you return to the pre-checkout state - but the URL and browser history remain in the future. We are now on /checkout/success even though Redux thinks no checkout occurred. A refresh loads the success page without data - an impossible state.

Early Redux projects tried to “solve” this with libraries like Connected React Router or Redux-First Router, which forced navigation through Redux dispatch. Both are now effectively abandoned. We learned the hard way that mixing routing (a browser concern) with Redux state adds complexity and latency without benefit. Routing simply doesn’t belong in Redux’s model - time travel cannot coherently include navigation.

Server Side Rendering

Today React applications increasingly rely on server-side rendering (SSR).
SSR introduces a further break in Redux’s “single timeline” assumption.
Each request creates a fresh Redux store on the server, dispatches actions to fetch data, renders HTML, and then discards the store.
The client receives a serialized snapshot of that store during hydration, but not the action history that produced it.
As a result, the client cannot “rewind” past hydration - there’s no continuous timeline to replay. Time travel, already incoherent for effects, becomes meaningless in SSR contexts: every page load resets time to zero.
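
A minimal sketch of that flow (rootReducer, loadDataFor, and pageDataLoaded are illustrative names):

// Server: a fresh store per request; only the final snapshot survives.
app.get("*", async (req, res) => {
  const store = configureStore({ reducer: rootReducer });
  store.dispatch(pageDataLoaded(await loadDataFor(req.url)));

  const html = renderToString(<App store={store} />);
  const state = JSON.stringify(store.getState()); // escape this in real code

  res.send(`${html}<script>window.__PRELOADED_STATE__=${state}</script>`);
  // The store - and the action history that built it - is discarded here.
});

// Client: hydration starts a brand-new timeline from the snapshot.
const store = configureStore({
  reducer: rootReducer,
  preloadedState: window.__PRELOADED_STATE__,
});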

Can We Time Travel?

To truly time-travel a web application, you’d need to snapshot and restore every source of state listed above.

Some are theoretically possible but impractical - browser storage, scroll positions, history, focus. Others are impossible: JavaScript provides no API to list pending timers or reify animation frames. Still others are physically impossible: you can’t rewind network requests, cancel events already emitted to third-party services, or revert backend database writes without server coordination. You can’t rewind Stripe, or Auth0, or Google Analytics. And you certainly can’t rewind time - Date.now() always moves forward.

Web applications are distributed systems spanning clients, servers, and services. Time travel in a distributed system requires global coordination across all participants. Redux assumes local replay is enough. It isn’t.

Time travel can be useful in very narrow contexts: small demos like TodoMVC, pure UI state (modal visibility, tab selection), or visualizing reducer transitions during development. The common thread: the absence of external effects. Once your app talks to APIs, schedules work, or persists anything, the illusion collapses. Production applications call APIs, maintain timers, interact with external services, and coordinate with state outside Redux.

When was the last time you used time travel to debug a production issue? The real debugging workflow is the same as it’s always been: console.log(), breakpoints, and network inspectors. Redux’s time travel was a compelling demo, but not much more than that.

Summary

Time-travel debugging in Redux is impossible for real applications. Web apps are distributed systems where Redux controls only part of total state. Rewinding that part while everything else continues forward creates impossible states. The feature doesn’t just fail in practice - it’s conceptually incoherent.

Yet this illusion drives another of Redux’s “essential” restrictive constraints: the insistence that all state must be serializable, so it can be recorded and replayed. In the next section, we’ll look at how that serialization requirement manifests in real code - and why it imposes significant costs for, as we just saw, marginal practical gain.

7. The Serialization Constraint

Redux’s Style Guide is explicit: “Do not put non-serializable values in state or actions.” The rule is marked as Essential. By default, Redux Toolkit warns when non-serializable data enters the store. The justification: serializable state enables time-travel debugging and makes persistence straightforward.

We’ve already seen that time travel fails to be a meaningful feature in real applications. Now let’s see what happens when the serialization constraint collides with the practical needs of modern web apps.

Real Application State

Web apps interact with a lot of runtime objects that are not JSON-serializable, and many of them are ordinary, mainstream platform features:

Common non-JSON values you encounter routinely

// Functions as values
const handler = () => doSomething();

// Maps and Sets
const cache = new Map();
const ids = new Set();

// Class instances with methods
class DataProcessor {
  process(d) {
    /*...*/
  }
}
const processor = new DataProcessor();

Browser and platform APIs

// Live connections and handles
const ws = new WebSocket("wss://...");

// File system handles
const [fileHandle] = await window.showOpenFilePicker();

// Web Audio / Canvas / WebGL contexts
const audioCtx = new AudioContext();
const gl = canvas.getContext("webgl");

// MediaStreams, IndexedDB connections, AudioNodes, etc.

Third-party SDKs:

const stripe = Stripe(...);
const auth0 = new Auth0Client(...);

These are not exotic edge cases. They’re first-class platform and JavaScript entities. Redux warns about all of them.

JSON vs structured clone

Redux’s serialization constraint is out of sync with the web platform itself.

Someone might respond: “But Map and Date are serializable - why complain?” There is an important distinction.
JSON serializability (JSON.stringify) is the narrowest notion: many useful types fail (Map, Set, Date, ArrayBuffer, circular graphs).
Structured-clone serializability (what structuredClone(), postMessage() and IndexedDB provide) supports many more types: Map, Set, Date, typed arrays, ArrayBuffer, etc. These types are perfectly serializable - just not in the JSON sense. You can carve a JavaScript Map onto a piece of wood, come back later, and perfectly reconstruct the Map from that piece of wood.
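
A quick illustration of the gap:

const prefs = new Map([["theme", "dark"]]);

// JSON silently loses the data - a Map stringifies to an empty object.
JSON.stringify(prefs); // '{}'

// Structured clone round-trips it perfectly.
const copy = structuredClone(prefs);
copy.get("theme"); // 'dark'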

Redux’s recommendation aligns with the narrow JSON-style notion. Modern browsers can do more, and Redux’s “serializable” rule is stricter than the structured-clone capability of the platform - there are historical reasons for that choice. But the strict ban still forces awkward workarounds in everyday code.

Often Disabled Anyway

Search for serializableCheck: false, ignoredActions, or ignoredPaths in public repositories and you’ll find the pattern repeated over and over. The check is frequently disabled in real apps. This is not “developer ignorance” - it’s pragmatic necessity.

Redux Toolkit exposes the serializability check as configurable middleware - you can relax it - but the prevalence of disabling it demonstrates the underlying mismatch: the rule is often too blunt for real systems.
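
The escape hatches look like this in configureStore (the ignored action types and state paths below are made up for illustration):

const store = configureStore({
  reducer: rootReducer,
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware({
      serializableCheck: {
        // Carve-outs accumulate as the app meets reality...
        ignoredActions: ["notifications/add", "files/fileChosen"],
        ignoredPaths: ["audio.context", "ws.connection"],
      },
      // ...or the check is switched off wholesale:
      // serializableCheck: false,
    }),
});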

Example: notifications via functions

A simple notification object that carries callbacks is idiomatic JavaScript:

const notification = {
  id: generateId(),
  message: "Settings saved",
  onAction: () => showDetails(),
  onDismiss: () => cleanup(),
};
addNotification(notification);

This is straightforward JavaScript. The notification is an object that carries its behavior. Functions are first-class values. This is how JavaScript works.

Redux forbids storing onAction directly. Functions aren’t serializable.
The common workarounds are all worse:

Option 1: Indirect callback registry
Maintain a Map(id => callback) outside Redux and keep only IDs in the store. Now you must coordinate lifecycle and cleanup by hand. You’ve separated data from behavior, re-linked only through string identifiers. You’ve transformed a simple thing (store an object with methods) into a synchronization problem.

// Store callbacks separately
const callbacks = new Map();

const notification = {
  id: generateId(),
  message: "Settings saved successfully",
  actionId: "show-details-123", // String identifier
  onDismissId: "cleanup-456",
};

callbacks.set("show-details-123", () => showDetails());
callbacks.set("cleanup-456", () => cleanup());

dispatch(addNotification(notification));

// Later, when notification is clicked:
const callback = callbacks.get(notification.actionId);
if (callback) callback();

Option 2: Central switch/handler.
Store an actionType and have a central handler switch on it. Adding a new action requires modifying global logic. Encapsulation is lost. Imagine being told that every button’s onClick handler must live in one big centralized switch statement.

// Hardcode action types
const notification = {
  id: generateId(),
  message: "Settings saved successfully",
  actionType: "SHOW_DETAILS",
};

// Later, in some centralized handler:
function handleNotificationAction(notification) {
  switch (notification.actionType) {
    case "SHOW_DETAILS":
      showDetails();
      break;
    case "OPEN_FILE":
      openFile();
      break;
    case "REFRESH_DATA":
      refreshData();
      break;
    // ... all possible actions
  }
}

Compare that to the idiomatic JavaScript version above.

Browser APIs

WebSocket connections, audio contexts, media streams, canvas contexts, file handles - these represent live runtime resources. They are not just data, they carry network state, event handlers, buffers, and scheduler state. You cannot JSON.stringify them. You cannot meaningfully serialize an open socket.

The typical pragmatic approach is to keep such resources outside Redux - module singletons, React refs, or feature-local stores that encapsulate the resource and expose a serializable surface (IDs, metadata, status). These approaches work, but they replicate the very global mutable state Redux claimed to eliminate - and they do it because Redux’s serialization rule forbids putting the resource into the central store.
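
A typical shape of that workaround - a minimal sketch with invented names:

// connection.js - the live socket lives in module scope, outside any store.
let socket = null;

export function connect(url, onStatus) {
  socket = new WebSocket(url);
  // Only serializable metadata crosses into application state.
  socket.onopen = () => onStatus({ status: "open", url });
  socket.onclose = (e) => onStatus({ status: "closed", code: e.code });
}

export function send(message) {
  socket?.send(JSON.stringify(message));
}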

The Persistence Argument

Redux argues: serializable state simplifies persistence. But in practice you already choose what to persist. Production applications don’t persist everything. They cherry-pick:

const persistConfig = {
  key: "root",
  storage,
  whitelist: ["user", "preferences"], // Only these
  blacklist: ["ui", "temp", "cache"], // Never these
};

Apps only persist a small subset (user preferences, tokens). The rest doesn’t need persistence. Persistence is a boundary concern: you serialize at the point of persisting (IndexedDB, localStorage, server API) - not as a global constraint on every in-memory value. You serialize when marshalling data out of your system and deserialize when unmarshalling it back in. This happens at boundaries.

Modern options:

  • Use structured clone (IndexedDB) for richer types.
  • Use custom serializers (superjson, msgpack, protobuf) when cross-platform persistence is needed.
  • Serialize only whitelisted slices and keep runtime resources local.

// Store what you need in memory - any types
const state = {
  user: { name: "John" },
  wsConnection: new WebSocket("..."), // Non-serializable
  preferences: new Map(), // Non-serializable to JSON
  audioContext: new AudioContext(), // Non-serializable
};

// Then serialize at the boundary - choose what works for you:

// Option 1: IndexedDB with structured clone (native support)
await db.put("state", {
  user: state.user,
  preferences: state.preferences, // Map works natively
});

// Option 2: localStorage with superjson (extended types)
localStorage.setItem(
  "state",
  superjson.stringify({
    user: state.user,
    preferences: state.preferences, // superjson handles Maps
  })
);

// Option 3: Remote API with custom serialization
await api.save({
  user: state.user,
  preferences: serialize(state.preferences),
});

The principle: your in-memory representation shouldn’t be constrained by your persistence format. Store what makes sense in memory. Serialize when persisting.

Redux Gets It Backwards

Redux inverts this relationship. It makes serialization an architectural constraint that affects everything, everywhere, all the time, even state you never persist. This is the tail wagging the dog. Persistence is one concern among many. It shouldn’t dictate how you structure state throughout your application.

The Cost

Practically, Redux’s serialization rule forces teams into one of several choices - none great:

  • Keep runtime resources out of Redux (module-level state, refs), reintroducing global mutability outside Redux.
  • Implement ID-to-callback registries and manual lifecycle management (more code, more bugs).
  • Use lossy or brittle serialization.
  • Disable the serializability check (admit the rule doesn’t fit reality).

Workarounds prevail. That’s evidence the rule is misaligned with real application needs.

The Sane Way

Separate in-memory representation from persistence format. Store what you need in memory - including functions and handles. Serialize only at boundaries.

Other libraries take this approach: Zustand, Valtio, Jotai, and MobX let you store richer objects and serialize at boundaries as needed. They trust the developer to choose what to persist.

// Zustand - store whatever you need
const useStore = create((set) => ({
  user: null,
  wsConnection: null, // No warnings
  fileHandle: null, // No warnings
  audioContext: new AudioContext(), // No warnings

  setWebSocket: (ws) => set({ wsConnection: ws }),
  setFileHandle: (handle) => set({ fileHandle: handle }),
}));

// Valtio - proxy-based, no constraints
const state = proxy({
  canvas: null,
  glContext: null,
  notifications: [],
});

// Add notification with callback
state.notifications.push({
  message: "Done",
  onAction: () => doSomething(), // Just works
});

// Jotai - atoms can contain anything
const wsAtom = atom(null);
const fileHandleAtom = atom(null);

Summary

Redux enforces serialization everywhere to enable time-travel debugging (which doesn’t really work) and persistence (which you implement selectively anyway). In real-world apps, that rule has a measurable cost: awkward indirections, lost encapsulation, duplicated registries, and code that lives outside Redux. Teams work around the rule routinely, which shows the constraint is the problem, not the application.

Serialization belongs at the boundary (persistence and inter-process communication), not as a universal architectural restriction. When you accept that, the right design is to encapsulate runtime resources and serialize explicitly where it actually matters.

8. API Surface and Ceremony

Developers frequently complain about Redux’s verbosity, the TypeScript friction, and the amount of boilerplate. Those complaints are valid - but they are symptoms, not the root problem. The ceremony emerges from Redux’s architectural choices: serializability, centralized state, and treating operations as data.

To be fair, it helps to recognize the reasons for some of Redux’s choices. Representing operations as plain objects makes them easy to log, inspect, and (in demos) replay. A single dispatch surface provides a convenient interception point for cross-cutting concerns: logging, analytics, persistence, and middleware. That said, these benefits are real but narrow - and the costs they incur are often larger than the benefits in production systems.

Observability

Yes, actions as plain objects make logging and inspection easy. But observability, like persistence, is a perimeter concern - not a reason to reshape the language.

Instead of forcing actions to be plain objects, keep actions as first-class functions and make observability orthogonal. Instrumentation can be pluggable - wrap or decorate action functions to emit compact, serializable telemetry (action name, args, and a small, explicit snapshot) to DevTools or to a tracing system akin to OpenTelemetry. This preserves ergonomics (plain functions, closures, local state) while providing the observability we need.

Instrumentation is a cross-cutting, pluggable concern - not an invariant that should dictate your in-memory model.
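
A minimal sketch of that idea - emit(), todoStore, and the action name are all illustrative:

// Wrap a plain action function so every call emits a serializable record.
function instrument(name, fn) {
  return (...args) => {
    emit({ action: name, args, at: Date.now() }); // emit: your telemetry sink
    return fn(...args);
  };
}

// The action stays an ordinary function - closures, locality and all.
const addTodo = instrument("todos/add", (text) => {
  todoStore.items.push({ id: crypto.randomUUID(), text });
});

addTodo("Buy milk"); // observed and executed - no dispatch pipeline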

Actions

Actions are function calls expressed as data. This is quite awkward.

// Redux: Describe the function call as data
const addTodoAction = { type: "todos/todoAdded", payload: "Buy milk" };
dispatch(addTodoAction);
// Somewhere else: reducer receives action, updates state

// What you actually want: Just call a function
addTodo("Buy milk");

Actions are function calls disguised as plain objects. The type is the function name. The payload is the parameter list. dispatch() is the call mechanism. Without middleware, it’s akin to an internal RPC mechanism running within a single JavaScript program.

Redux turns every function call into an object with a type and a payload. This enables middleware and toolchains to observe and process operations uniformly, but it also forces every update through the same indirect path: create action -> dispatch -> reducer switch/effect -> state change.

That indirection has real costs:

Boilerplate: action constants, action creators, reducer cases, and dispatch calls for what should be the simplest of operations.

Cognitive noise: instead of calling a function that describes intent, you assemble and pass a data object, with the intent handled somewhere else.

Locality of reasoning: to understand what happens when you “add todo”, you must inspect the action creator, reducer, and any middleware that intercepts it.

Actions Ceremony

// Define the action type
const TODO_ADDED = 'todos/todoAdded';

// Create an action creator
const addTodo = (text) => ({ type: TODO_ADDED, payload: text });

// Handle it in the reducer
function todosReducer(state = { todos: [] }, action) {
  switch (action.type) {
    case TODO_ADDED:
      return { ...state, todos: [...state.todos, action.payload] };
    default:
      return state;
  }
}

// Dispatch the action
dispatch(addTodo('Buy milk'));

Redux Toolkit reduces ceremony with createSlice and createAction, but it doesn’t eliminate the architectural pattern: operations are still described as data and routed through reducers.
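
For comparison, the same operation with createSlice - less syntax, same choreography:

import { createSlice } from "@reduxjs/toolkit";

const todosSlice = createSlice({
  name: "todos",
  initialState: { todos: [] },
  reducers: {
    todoAdded(state, action) {
      state.todos.push(action.payload); // Immer makes this mutation safe
    },
  },
});

export const { todoAdded } = todosSlice.actions;

// Still an action object routed through dispatch and a reducer:
dispatch(todoAdded("Buy milk"));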

Selectors

Reading state in Redux typically goes through selectors:

const selectUser = (state) => state.auth.user;
const user = useSelector(selectUser);

Selectors are more than indirection. They provide:

  • Encapsulation: a single place to compute derived data (e.g., fullName = first + last).
  • Performance: memoized selectors (reselect) avoid expensive recomputation and reduce re-renders by returning stable object references.
  • Stable contracts: code outside a feature doesn’t need to know the state shape; it uses the selector API instead.

That said, selectors add ceremony. When you compare it to local feature hooks that return values directly (const { user } = useAuth()), selectors feel verbose. The right lesson is not that selectors are useless, but that the API shape Redux exposes (global selectors + useSelector) is not the most ergonomic way to express feature-local behavior.
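
For reference, the memoization mentioned above, sketched with reselect:

import { createSelector } from "reselect";

const selectTodos = (state) => state.todos.items;

// Recomputed only when state.todos.items changes; otherwise the same
// array reference is returned, which avoids needless re-renders.
const selectCompleted = createSelector([selectTodos], (todos) =>
  todos.filter((t) => t.completed)
);

// In a component:
const completed = useSelector(selectCompleted);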

The Wrapper Pattern

Experienced teams end up wrapping Redux’s API to create what they actually wanted:

export function useAuth() {
  const user = useSelector(selectUser);
  const dispatch = useDispatch();

  const login = useCallback(
    async (credentials) => {
      dispatch(loginPending());
      try {
        const result = await api.login(credentials);
        dispatch(loginSuccess(result));
      } catch (error) {
        dispatch(loginFailure(error));
        throw error;
      }
    },
    [dispatch]
  );

  return { user, login };
}

Components then consume a clean function-API:

// Components use the semantic API
function LoginForm() {
  const { login } = useAuth();
  return <button onClick={() => login(credentials)}>Login</button>;
}

If you’re wrapping useSelector and useDispatch in every feature to create semantic functions, you’re building the API you actually wanted. Redux gave you dispatch and selectors; you build the straightforward semantic layer on top. The wrapper pattern reveals that Redux’s API doesn’t match how you want to structure code.

TypeScript Complexity

Typing Redux can be painful in large projects. All the typing for RootState, AppDispatch, thunk return types, selector parameters, and action payloads. Oh, my!

As your store grows - more slices, dynamic reducer injection, typing the thunk extra argument - the TypeScript surface becomes harder and harder to maintain, and type errors turn cryptic.

There are mitigations. Still, the global and dynamic nature of Redux makes typing harder than with localized stores that expose simple function signatures. The result: types often feel like scaffolding you maintain rather than helpful documentation.

Boilerplate as Symptom

Redux Toolkit improves syntax but can’t eliminate the fundamental patterns. Actions, reducers, selectors, dispatch - these aren’t tooling limitations. They’re structural requirements of Redux’s architecture.

Redux Toolkit removes much of the repetitive syntax, but it can’t change the underlying pattern: actions -> dispatch -> reducers -> state. This pattern forces you to write glue code. When you wrap Redux in local hooks, those wrappers are implicitly telling you two things:

  • The primitive API Redux provides is not the surface you want to program against.
  • The wrapper is the correct abstraction for the code: functions that encapsulate behavior and return state.

If you find your codebase wrapping useSelector and useDispatch into feature hooks everywhere, that wrapper is your true API. Prefer libraries or patterns that give you that API directly, rather than imposing a choreography you must constantly undo.

Alternatives

Modern libraries, like Zustand, Valtio, Jotai, and MobX, demonstrate that predictable state management doesn’t require a dispatch architecture or action objects.
They treat state as live data with direct update functions, not as an event log to replay later.

9. Hindsight

Redux wasn’t a mistake. It was an important evolutionary step.

When Redux appeared, it solved real problems. React had no hooks, no reliable story for global state. Redux brought clarity, predictability, and testability through a simple contract:

(previousState, action) => newState

That idea was powerful. It made side effects explicit. It encouraged immutability before Immer, predictable updates before hooks, and a mental model that could be reasoned about in isolation.

But Redux is optimized for visibility over usability, for purity over practicality. Its architecture assumed that:

  • state can be centralized,
  • actions must be serializable,
  • effects are secondary,
  • and time travel is meaningful.

Modern frameworks and libraries have since moved past these assumptions.

  • React Query, SWR, and similar tools unify state and effects instead of separating them.
  • Zustand, Valtio, and Jotai treat state as reactive data, not as a serialized log of actions.
  • MobX showed us yet another way.
  • Even React itself evolved: hooks and context made global state an application pattern, not a framework feature.

We now understand that application state isn’t a single tree - it’s a topology of interacting subsystems: UI state, server cache, browser APIs, ephemeral data, and effects. There is no single “source of truth” but well-defined boundaries and responsibilities.

The issue isn’t that Redux was wrong. It’s that it was so often mistaken for a one-size-fits-all solution to state management.

Redux’s Legacy

Redux’s real achievement wasn’t its implementation - it was making state explicit.

Before Redux, state was hidden in components, mutable stores, and uncontrolled side effects. Redux forced developers to think about state transitions, determinism, and immutability.

We can appreciate Redux for what it taught us without continuing to build our systems around its limitations.

Today, Redux still works. But for most modern applications it is no longer the natural choice - and the justifications for picking it today are increasingly weak.

Even for Redux’s creators themselves

[Tweet screenshots: dan-redux-tweet, andrew-redux-tweet, andrew-redux-tweet2]