Generalization and Serialization in C# — Writing Code That Reuses and Remembers

Hello, .NET developers! 👋

Every real application shares two silent goals — reusability and portability. Reusability comes from writing code that can handle different types without rewriting logic. Portability comes from turning objects into transferable data so they can move across files, APIs, or networks. In C#, these two ideas are embodied by Generics (for generalization) and Serialization (for object persistence).


Understanding Generalization — Making Code Reusable and Type-Safe

Generalization means creating a design that works with multiple data types while keeping strong type safety. In C#, the tool for this is Generics. Instead of writing separate versions of the same class or method for different types, you define one version that adapts to any type at compile time.

Example: Generic Repository

using System;
using System.Collections.Generic;
using System.Linq;

public class Repository<T>
{
    private readonly List<T> _items = new();

    public void Add(T item) => _items.Add(item);
    public IEnumerable<T> GetAll() => _items;
}

public class Customer
{
    public string Name { get; set; }
}

public class Product
{
    public string Title { get; set; }
}

class Program
{
    static void Main()
    {
        var customerRepo = new Repository<Customer>();
        customerRepo.Add(new Customer { Name = "Bhargavi" });

        var productRepo = new Repository<Product>();
        productRepo.Add(new Product { Title = "Laptop" });

        Console.WriteLine(customerRepo.GetAll().First().Name);
    }
}

Here, one generic repository works for both Customer and Product. That’s generalization in action — your logic is abstract, but your type safety remains intact. If you accidentally try to add a Product into a Repository<Customer>, the compiler will stop you immediately.
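
For instance, using the repositories from the example above, this line would be rejected at compile time:

// customerRepo.Add(new Product { Title = "Laptop" });   // does not compile: a Product is not a Customer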

Generics are the backbone of modern C# libraries — from List<T> and Dictionary<TKey,TValue> to dependency injection containers. They prevent runtime casting errors and, for value types, deliver performance benefits because no boxing or unboxing is involved.
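
To see the boxing point concretely, here is a minimal sketch (separate from the repository example) contrasting the non-generic ArrayList with the generic List<int>:

using System.Collections;            // non-generic ArrayList
using System.Collections.Generic;    // generic List<T>

var untyped = new ArrayList();
untyped.Add(42);                     // the int is boxed into an object
int first = (int)untyped[0];         // cast required; fails at runtime if the element isn't an int

var typed = new List<int> { 42 };
int safe = typed[0];                 // no boxing, no cast, checked at compile time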


Constraints — Guiding the Type Parameter

Sometimes, you need to tell the compiler what your generic type can or must do. That’s where constraints come in. They narrow down what types are allowed as T.

Example: Adding a Constraint

public interface IEntity
{
    int Id { get; set; }
}

public class Repository<T> where T : IEntity
{
    private readonly List<T> _items = new();

    public void Add(T item)
    {
        _items.Add(item);
        Console.WriteLine($"Added entity with ID: {item.Id}");
    }
}

The constraint where T : IEntity ensures that only types implementing IEntity can be stored. This lets you safely access item.Id inside the generic class without reflection or dynamic typing.
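
As a quick usage sketch (Invoice is a hypothetical model, not part of the example above), any type that implements IEntity works, while one that does not is rejected by the compiler:

public class Invoice : IEntity
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
}

var invoices = new Repository<Invoice>();
invoices.Add(new Invoice { Id = 101, Amount = 2500m });   // prints "Added entity with ID: 101"

// var notes = new Repository<string>();   // does not compile: string does not implement IEntity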


Real-World Analogy — The Generic Toolbox

Think of generics like a toolbox that fits interchangeable tools. The toolbox (generic class) is built once, but the tools (types) change depending on the task. You can carry the same box to fix different problems without rebuilding it each time.


Serialization — Turning Objects into Data

Now let’s switch gears. If generalization helps us reuse logic, serialization helps us reuse data. Serialization is the process of converting an object into a format that can be stored (in a file or database) or transmitted (across a network). Deserialization reverses that process — it rebuilds the object from that data.

Common formats include JSON, XML, and binary. In modern .NET, System.Text.Json is the preferred library for JSON serialization — fast, lightweight, and built-in.

Example: JSON Serialization

using System.Text.Json;

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Department { get; set; }
}

class Program
{
    static void Main()
    {
        var emp = new Employee { Id = 1, Name = "Anita", Department = "Finance" };

        // Serialize
        string json = JsonSerializer.Serialize(emp);
        Console.WriteLine("Serialized JSON:");
        Console.WriteLine(json);

        // Deserialize
        var copy = JsonSerializer.Deserialize<Employee>(json);
        Console.WriteLine($"Deserialized Employee: {copy.Name} from {copy.Department}");
    }
}

The object is first serialized into JSON text and then reconstructed back into an Employee object. This makes it easy to send employee data through APIs or store it in a file for later use.
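
With the default options, the serialized text should look roughly like this (property names are emitted as declared, with no indentation):

{"Id":1,"Name":"Anita","Department":"Finance"}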


The Hidden Magic — Reflection and Attributes

By default, serialization in .NET relies heavily on reflection — the runtime mechanism that inspects an object's properties and fields. Attributes let you customize how serialization behaves. For example, you can rename a property, ignore a field, or format a date differently.

Example: Customizing Serialization

using System.Text.Json.Serialization;

public class Order
{
    [JsonPropertyName("order_id")]
    public int Id { get; set; }

    [JsonIgnore]
    public string InternalNote { get; set; }

    public string Product { get; set; }
}

Here, Id will appear in JSON as order_id, and InternalNote will be skipped entirely. These small details matter when integrating with third-party APIs or following naming conventions in microservices.
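
For example, serializing new Order { Id = 42, InternalNote = "rush", Product = "Laptop" } with the default options should produce something like:

{"order_id":42,"Product":"Laptop"}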


Real-World Scenario — Combining Both

Let’s connect generalization and serialization together. Imagine you’re building a data-sync service that pulls records from different systems — Customers, Orders, and Payments — and stores them as JSON files. With generics, you can create a single serializer service that handles any entity type.

Example: Generic Serializer Service

using System;
using System.IO;
using System.Text.Json;

public class JsonFileSerializer<T>
{
    public void Save(string filePath, T data)
    {
        var json = JsonSerializer.Serialize(data, new JsonSerializerOptions { WriteIndented = true });
        File.WriteAllText(filePath, json);
        Console.WriteLine($"Saved {typeof(T).Name} data to {filePath}");
    }

    public T Load(string filePath)
    {
        var json = File.ReadAllText(filePath);
        return JsonSerializer.Deserialize<T>(json);
    }
}

Now, your app can handle any model with one reusable class:

var customerSerializer = new JsonFileSerializer<Customer>();
customerSerializer.Save("customer.json", new Customer { Name = "Bhargavi" });

var orderSerializer = new JsonFileSerializer<Order>();
orderSerializer.Save("order.json", new Order { Product = "Laptop" });

This pattern powers many enterprise systems — reusable serialization logic that adapts to any domain model using generics. It’s flexible, type-safe, and fully aligned with clean-architecture principles.
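
Reading the data back is just as simple; a quick sketch using the Load method defined above (assuming the files written by the Save calls earlier):

var restoredCustomer = customerSerializer.Load("customer.json");
Console.WriteLine(restoredCustomer.Name);   // prints "Bhargavi"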


Wrapping Up

Generics make your code flexible and maintainable; serialization makes your data portable and persistent. Together, they let your software think in types but speak in data. Whenever you design a system that needs both reusability and communication — think APIs, data pipelines, or microservices — you’ll find these two concepts working hand in hand.
