
private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
    in.defaultReadObject();
    this.score = calculateScore();
}

Why order matters in custom serialization logic
When writing custom serialization logic, the order in which values are written must exactly match the order in which they are read:
private void writeObject(ObjectOutputStream out) throws IOException {
    out.defaultWriteObject();
    out.writeInt(42);
    out.writeUTF("Duke");
    out.writeLong(1_000_000L);
}
private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
    in.defaultReadObject();
    int level = in.readInt();
    String name = in.readUTF();
    long score = in.readLong();
}

Because the stream is not keyed by field name, each read call simply consumes the next value in sequence. If readUTF were called before readInt, the stream would attempt to interpret the bytes of an integer as a UTF string, resulting in corrupted data or a deserialization failure. This is one of the main reasons custom serialization should be used sparingly. A useful mental model is to think of serialization as a tape recorder: deserialization must replay the tape in exactly the order it was recorded.
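The tape-recorder model can be made concrete with a small, self-contained round trip. The class below is a hypothetical example (not from the article's code base): its extra values are written after defaultWriteObject and read back in exactly the same sequence.

```java
import java.io.*;

// Hypothetical class illustrating the "tape recorder" model: values
// written after defaultWriteObject must be read back in the same order.
public class OrderDemo implements Serializable {
    private static final long serialVersionUID = 1L;

    // transient: excluded from default serialization, handled manually below
    transient int level;
    transient String name;
    transient long score;

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeInt(level);   // 1st value on the "tape"
        out.writeUTF(name);    // 2nd
        out.writeLong(score);  // 3rd
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        this.level = in.readInt();  // must replay in exactly the same order
        this.name = in.readUTF();
        this.score = in.readLong();
    }

    // Serializes to a byte array and deserializes back, in memory.
    public static OrderDemo roundTrip(int level, String name, long score) {
        OrderDemo original = new OrderDemo();
        original.level = level;
        original.name = name;
        original.score = score;
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(original);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (OrderDemo) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Swapping the order of any two read calls in readObject would desynchronize the stream and corrupt every value read from that point on.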
Why serialization is risky
Serialization is fragile when classes change. Even small modifications can make previously stored data unreadable.
Deserializing untrusted data is particularly dangerous. Deserialization can trigger unexpected code paths on attacker‑controlled object graphs, and this has been the source of real‑world security vulnerabilities.
For these reasons, Java serialization should be used only in controlled environments.
When serialization makes sense
Java serialization is suitable only for a narrow set of use cases where class versions and trust boundaries are tightly controlled.
| Use case | Recommendation |
| --- | --- |
| Internal caching | Java serialization works well when data is short-lived and controlled by the same application. |
| Session storage | Acceptable with care, provided all participating systems run compatible class versions. |
| Long-term storage | Risky: Even small class changes can make old data unreadable. |
| Public APIs | Use JSON. It is language-agnostic, stable across versions, and widely supported. Java serialization exposes implementation details and is fragile. |
| System-to-system communication | Prefer JSON or schema-based formats such as Protocol Buffers or Avro. |
| Cross-language communication | Avoid Java serialization entirely. It is Java-specific and not interoperable with other platforms. |
Rule of thumb: If the data must survive class evolution, cross trust boundaries, or be consumed by non‑Java systems, prefer JSON or a schema‑based format over Java serialization.
Advanced serialization techniques
The mechanisms we’ve covered so far handle most practical scenarios, but Java serialization has a few additional tools for solving problems that default serialization cannot.
Preserving singletons with readResolve
Deserialization creates a new object. For classes that enforce a single instance, this breaks the guarantee silently:
public class GameConfig implements Serializable {
    private static final long serialVersionUID = 1L;
    private static final GameConfig INSTANCE = new GameConfig();

    private GameConfig() {}

    public static GameConfig getInstance() {
        return INSTANCE;
    }

    private Object readResolve() throws ObjectStreamException {
        return INSTANCE;
    }
}

Without readResolve, deserializing a GameConfig would produce a second instance, and any identity check using == would fail. The method intercepts the deserialized object and substitutes the canonical one. The deserialized copy is discarded.
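The identity guarantee is easy to verify with an in-memory round trip. This is a self-contained sketch (the nested class name is illustrative, mirroring GameConfig above): the deserialized result is the very same object returned by getInstance.

```java
import java.io.*;

// Self-contained sketch: readResolve preserves singleton identity
// across a serialization round trip. Class names are illustrative.
public class SingletonDemo {
    public static class Config implements Serializable {
        private static final long serialVersionUID = 1L;
        private static final Config INSTANCE = new Config();
        private Config() {}
        public static Config getInstance() { return INSTANCE; }
        private Object readResolve() throws ObjectStreamException {
            return INSTANCE; // discard the freshly deserialized copy
        }
    }

    // Returns true if deserialization yields the canonical instance.
    public static boolean sameInstanceAfterRoundTrip() {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(Config.getInstance());
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return in.readObject() == Config.getInstance(); // identity, not equals
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```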
Substituting objects with writeReplace
Whereas readResolve controls what comes out of deserialization, writeReplace controls what goes into serialization. A class can define this method to substitute a different object before any bytes are written.
The two methods are often used together to implement a serialization proxy. One class represents the object’s runtime form, while another represents its serialized form.
In this example, ChallengerWriteReplace plays the role of the “real” object, while ChallengerProxy represents its serialized form:
public class ChallengerProxy implements Serializable {
    private static final long serialVersionUID = 1L;
    private final long id;
    private final String name;

    public ChallengerProxy(long id, String name) {
        this.id = id;
        this.name = name;
    }

    private Object readResolve() throws ObjectStreamException {
        return new ChallengerWriteReplace(id, name);
    }
}

class ChallengerWriteReplace implements Serializable {
    private static final long serialVersionUID = 1L;
    private long id;
    private String name;

    public ChallengerWriteReplace(long id, String name) {
        this.id = id;
        this.name = name;
    }

    private Object writeReplace() throws ObjectStreamException {
        return new ChallengerProxy(id, name);
    }
}

When a ChallengerWriteReplace instance is serialized, its writeReplace method substitutes it with a lightweight ChallengerProxy. The proxy is the only object that is actually written to the byte stream.
During deserialization, the proxy’s readResolve method reconstructs a new ChallengerWriteReplace instance, and the proxy itself is discarded. The application never observes the proxy object directly.
This technique keeps the serialized form decoupled from the internal structure of ChallengerWriteReplace. As long as the proxy remains stable, the main class can evolve freely without breaking previously serialized data. It also provides a controlled point where invariants can be enforced during reconstruction.
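A full round trip makes the handoff visible. The sketch below is self-contained (its nested Real and Proxy classes mirror ChallengerWriteReplace and ChallengerProxy from the article): the caller serializes and deserializes the "real" class and never sees the proxy.

```java
import java.io.*;

// Self-contained sketch of the serialization-proxy pattern. The nested
// class names are illustrative stand-ins for the article's classes.
public class ProxyDemo {
    static class Proxy implements Serializable {
        private static final long serialVersionUID = 1L;
        private final long id;
        private final String name;
        Proxy(long id, String name) { this.id = id; this.name = name; }
        private Object readResolve() throws ObjectStreamException {
            return new Real(id, name); // rebuild the runtime form
        }
    }

    static class Real implements Serializable {
        private static final long serialVersionUID = 1L;
        final long id;
        final String name;
        Real(long id, String name) { this.id = id; this.name = name; }
        private Object writeReplace() throws ObjectStreamException {
            return new Proxy(id, name); // only the proxy hits the stream
        }
    }

    // Serializes a Real (written as Proxy) and deserializes back to Real.
    public static Real roundTrip(long id, String name) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(new Real(id, name));
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Real) in.readObject(); // readResolve already ran
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```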
Filtering deserialized classes with ObjectInputFilter
I have explained why deserializing untrusted data is dangerous. Introduced in Java 9, the ObjectInputFilter API gives applications a way to restrict which classes are allowed during deserialization:
ObjectInputFilter filter = ObjectInputFilter.Config.createFilter(
        "com.example.model.*;!*"
);

try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("data.ser"))) {
    in.setObjectInputFilter(filter); // must be set before readObject()
    Object obj = in.readObject();
}

This filter allows only classes under com.example.model and rejects everything else. The pattern syntax supports allowlisting by package, as well as setting limits on array sizes, object graph depth, and total object count.
Java 9 made it possible to set a process-wide filter via ObjectInputFilter.Config.setSerialFilter or the jdk.serialFilter system property, ensuring that no ObjectInputStream would be left unprotected by default. Java 17 extended this further by introducing filter factories (ObjectInputFilter.Config.setSerialFilterFactory), which allow context‑specific filters to be applied per stream rather than relying on a single global policy. If your application deserializes data that crosses a trust boundary, an input filter is not optional; it is the minimum viable defense.
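A filter's accept/reject behavior can be exercised entirely in memory. This is a self-contained sketch with illustrative class and pattern names: a rejected class surfaces as an InvalidClassException from readObject.

```java
import java.io.*;

// Self-contained sketch of per-stream filtering (JEP 290). Class and
// pattern names are illustrative, not from the article's code base.
public class FilterDemo {
    public static class Payload implements Serializable {
        private static final long serialVersionUID = 1L;
        int value = 7;
    }

    public static byte[] serialize(Object o) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(o);
            }
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Returns true if the filter accepted every class in the stream.
    public static boolean deserializeWithFilter(byte[] data, String pattern) {
        ObjectInputFilter filter = ObjectInputFilter.Config.createFilter(pattern);
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            in.setObjectInputFilter(filter); // must precede readObject()
            in.readObject();
            return true;
        } catch (InvalidClassException rejected) {
            return false; // the filter rejected a class in the stream
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The "!*" pattern rejects every class, which is a useful default tail for any allowlist.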
Java records and serialization
Java records can implement Serializable, but they behave differently from ordinary classes in one critical way: During deserialization, the record’s canonical constructor is called. This means any validation logic in the constructor runs on deserialized data, which is a significant safety advantage:
public record ChallengerRecord(Long id, String name) implements Serializable {
    public ChallengerRecord {
        if (id == null || name == null) {
            throw new IllegalArgumentException(
                    "id and name must not be null");
        }
    }
}

With a traditional Serializable class, a corrupted or malicious stream could inject null values into fields that the constructor would normally reject. With a record, the constructor acts as a gatekeeper even during deserialization.
Records do not support writeObject, readObject, or serialPersistentFields. Their serialized form is derived entirely from their components, a design decision that intentionally favors predictability and safety over customization.
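A quick round trip shows both properties: the record survives serialization by value, and its canonical constructor rejects invalid state. The record name below is illustrative, mirroring ChallengerRecord above.

```java
import java.io.*;

// Self-contained sketch: a validating record round-trips cleanly, and
// the canonical constructor also runs during deserialization.
// The record name is illustrative, not from the article's code base.
public class RecordDemo {
    public record Player(Long id, String name) implements Serializable {
        public Player {
            if (id == null || name == null) {
                throw new IllegalArgumentException("id and name must not be null");
            }
        }
    }

    public static Player roundTrip(Player p) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(p);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Player) in.readObject(); // canonical constructor runs here
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```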
Alternatives to Java serialization
The Externalizable interface is an alternative to Serializable that gives the class complete control over the byte format. A class that implements Externalizable must define writeExternal and readExternal, and must provide a public no‑argument constructor:
public class ChallengerExt implements Externalizable {
    private long id;
    private String name;

    public ChallengerExt() {} // required

    public ChallengerExt(long id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeLong(id);
        out.writeUTF(name);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        this.id = in.readLong();
        this.name = in.readUTF();
    }
}

Unlike Serializable, no field metadata or field values are written automatically. The class descriptor (class name and serialVersionUID) is still written, but the developer is fully responsible for writing and reading all instance state.
Because writeExternal and readExternal work directly with primitives and raw values, fields should use primitive types where possible. Using a wrapper type such as Long with writeLong would throw a NullPointerException if the value were null, since auto‑unboxing cannot handle that case.
This approach can produce more compact output, but the developer is fully responsible for versioning, field ordering, and backward compatibility.
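An Externalizable class round-trips like any other serializable object; only the byte layout differs. This is a self-contained sketch whose nested class mirrors ChallengerExt above (the names are illustrative).

```java
import java.io.*;

// Self-contained sketch: round-tripping an Externalizable class.
// The nested class mirrors ChallengerExt; names are illustrative.
public class ExtDemo {
    public static class Challenger implements Externalizable {
        private static final long serialVersionUID = 1L;
        long id;
        String name;

        public Challenger() {} // public no-arg constructor is required

        public Challenger(long id, String name) {
            this.id = id;
            this.name = name;
        }

        @Override
        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeLong(id);  // all state must be written explicitly...
            out.writeUTF(name);
        }

        @Override
        public void readExternal(ObjectInput in) throws IOException {
            this.id = in.readLong();  // ...and read back in the same order
            this.name = in.readUTF();
        }
    }

    public static Challenger roundTrip(Challenger c) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(c);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Challenger) in.readObject(); // calls the no-arg ctor, then readExternal
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```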
In practice, Externalizable is rarely used in modern Java. When full control over the wire format is needed, most teams choose Protocol Buffers, Avro, or similar schema‑based formats instead.
Conclusion
Java serialization is a low-level JVM mechanism for saving and restoring object state. Known for being powerful but unforgiving, serialization bypasses constructors, assumes stable class definitions, and provides no automatic safety guarantees. Used deliberately in tightly controlled systems, it can be effective. Used casually, it introduces subtle bugs and serious security vulnerabilities. Understanding the trade-offs discussed in this article will help you use serialization correctly and avoid accidental misuse.

