Net-Base Magazine

11.04.2026

Replacing Borland BDE with FireDAC: A Guide for Safe Delphi Modernization Without a Big Bang

Many Delphi legacy applications still use the Borland Database Engine (BDE) – often stable, but with growing risks around deployment, 64‑bit, security, and modern database strategy. This article shows how companies can gradually and in a controlled manner replace the BDE with FireDAC...


In many companies the Borland Database Engine (BDE) is still part of business-critical Delphi applications: accumulated domain logic, UI-near data access with TTable/TQuery, sometimes still Paradox/dBase, sometimes early client/server installations. The common reality is often: the software works, users know the processes, and in day-to-day operations there is no immediate reason to “touch anything”. At the same time the technical foundation is changing: operating systems are being hardened, deployment is being standardized, 64‑bit is expected, and data storage should be on database servers with a proper rights and backup concept.

Right at this point, replacing the Borland BDE with FireDAC becomes a strategic modernization task. FireDAC is the established data-access framework for modern databases in current Delphi versions. It delivers consistent behavior, robust drivers, Unicode support, monitoring/tracing and an architecture that can serve desktop clients as well as services and REST servers. The migration is rarely just a 1:1 component swap, especially not when the legacy application has baked in BDE-specific behavior over the years (transaction assumptions, data formats, filters/sorts, cached updates, third-party reports).

This article focuses on the practical approach: how do you replace the BDE with FireDAC without endangering the business logic and without forcing a big-bang relaunch? You will get an actionable phase model, technical target pictures and notes on typical problem areas in enterprise operation.

Why replacing the BDE today is more than maintenance

As long as a BDE application works, a replacement appears to be mere “code cleanup”. In practice, however, the pressure usually arises from operational and risk issues.

Deployment, security baselines and “no-touch” clients

The BDE was historically designed for local configuration (BDE Administrator, alias definitions, NetDir, shared configuration files). In modern environments manual steps and machine-wide settings are hard to reconcile with software distribution, hardening and auditability. FireDAC allows much more controllable deployments because connection parameters and driver settings can be managed close to the application.

64‑bit, Windows modernization and new platform targets

Once an application has to run in 64‑bit (memory requirements, driver/Office ecosystem, new hardware, terminal server strategies), the BDE effectively becomes a blocker. FireDAC supports 32/64‑bit consistently and is therefore a core building block of any Delphi modernization that must not fail on data access. Incidentally, topics like Windows 11 ARM64 and hybrid client/service architectures only become reliably plannable with this in place.

Database strategy: away from file-based, toward server-based

Many BDE applications still carry legacy from Paradox/dBase times. These file databases are more vulnerable in multi-user operation, administratively harder to back up and poorly suited to today’s requirements (roles/permissions, encryption, monitoring, high availability). FireDAC is not “the new Paradox driver”, but the modern gateway to SQL Server, PostgreSQL, MariaDB and Firebird. In practice, therefore, the BDE replacement is often the start signal to professionalize data storage and operation.

Maintainability and diagnosability in operation

An underestimated cost factor is troubleshooting: sporadic locking issues, inconsistent cursor behavior, hard-to-trace parameter conversions or network/path issues. FireDAC offers better starting points for reproducible error analysis with logging, monitoring and clearer type behavior. For companies that intend to operate an application long-term and extend it selectively, this is an immediate benefit.

BDE vs. FireDAC: differences that matter in migration

On paper components can be mapped. In reality it’s about behavior changes that can create business-side side effects. A brief orientation:

Component mapping (as a starting point)

  • TDatabase (BDE) → TFDConnection (FireDAC)
  • TQuery (BDE) → TFDQuery
  • TTable (BDE) → TFDTable (in modernizations often better: query-/view-based access)
  • TStoredProc (BDE) → TFDStoredProc
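
As a rough sketch of what this mapping looks like in code (connection parameters, server and table names are placeholders; a SQL Server target is assumed here, other databases only change the driver unit and DriverID):

```delphi
uses
  FireDAC.Comp.Client, FireDAC.Stan.Def, FireDAC.Stan.Async,
  FireDAC.DApt, FireDAC.Phys.MSSQL; // driver unit depends on the target DB

procedure OpenCustomers;
var
  Conn: TFDConnection;   // replaces TDatabase
  Qry: TFDQuery;         // replaces TQuery
begin
  Conn := TFDConnection.Create(nil);
  Qry := TFDQuery.Create(nil);
  try
    Conn.Params.Add('DriverID=MSSQL');
    Conn.Params.Add('Server=DBSRV01');    // placeholder
    Conn.Params.Add('Database=ErpDemo');  // placeholder
    Conn.Params.Add('OSAuthent=Yes');
    Conn.Connected := True;

    Qry.Connection := Conn;
    Qry.SQL.Text := 'SELECT ID, NAME FROM CUSTOMER ORDER BY NAME';
    Qry.Open;
    // ... bind to the UI or process rows ...
  finally
    Qry.Free;
    Conn.Free;
  end;
end;
```

In real projects the connection is of course not created per call but provided by a central connection layer, as described below.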

The most common behavioral differences

  • Parameters and data types: FireDAC is more precise. “It’ll probably work” SQL becomes visible sooner (e.g. dates as strings, implicit conversions, unclear nullability).
  • Transactions: Legacy code often contains implicit commit assumptions (closing a dataset, auto-commit-like patterns, cached updates). With FireDAC deliberate transaction control pays off because it improves business consistency.
  • Cursor/fetch: FireDAC has different defaults and more knobs. Inefficient patterns (large resultsets for UI lists) become more apparent but can be optimized deliberately.
  • Unicode: In modern Delphi versions Unicode is standard. The FireDAC chain (client library, connection options, DB collation, field types) must be consistent, otherwise character and comparison issues may occur.
  • Deployment: Depending on the DB, client libraries are required (e.g. libpq for PostgreSQL). That must be planned early, otherwise surprises may appear close to production.
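
The parameter/typing point in particular deserves a small example. A pattern the BDE often tolerated, a date formatted into the SQL string, should become an explicitly typed parameter (table and column names are illustrative):

```delphi
// Before (fragile): date concatenated into the SQL text,
// dependent on locale and driver conversion rules:
//   Qry.SQL.Text := 'SELECT * FROM ORDERS WHERE ORDER_DATE > ''' +
//     DateToStr(FromDate) + '''';

// After: typed parameters, no format ambiguity
Qry.SQL.Text :=
  'SELECT ORDER_ID, ORDER_DATE, TOTAL FROM ORDERS ' +
  'WHERE ORDER_DATE > :FROM_DATE AND CUSTOMER_ID = :CUST_ID';
Qry.ParamByName('FROM_DATE').AsDateTime := FromDate;
Qry.ParamByName('CUST_ID').AsInteger := CustId;
Qry.Open;
```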

Target picture for a FireDAC architecture: stable, testable, extensible

A BDE replacement should not end in “FireDAC everywhere somehow”. A viable target picture is especially valuable if the application will be further developed or embedded in services/portals.

Minimal goal: unified connection layer

Instead of distributed connections in forms, a central connection layer is recommended:

  • Creation and configuration of TFDConnection in one place
  • Unified timeouts, encoding/character set, error handling
  • Switching Dev/Test/Prod without manual rework
  • Optional: centralized activation of tracing/monitoring for diagnostics
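
A minimal sketch of such a layer, assuming connection parameters are kept in a managed key=value file per environment (the class name and file path are illustrative):

```delphi
type
  // Illustrative central factory: the one place that knows how
  // connections are configured for Dev/Test/Prod.
  TDbConnectionFactory = class
  public
    class function CreateConnection(const AEnv: string): TFDConnection;
  end;

class function TDbConnectionFactory.CreateConnection(
  const AEnv: string): TFDConnection;
begin
  Result := TFDConnection.Create(nil);
  // Load DriverID/Server/Database etc. from a managed source
  // (key=value file, JSON, registry) instead of hard-coding in forms.
  Result.Params.LoadFromFile('db.' + AEnv + '.params'); // placeholder
  Result.LoginPrompt := False;
  Result.ResourceOptions.CmdExecTimeout := 30000; // unified timeout (ms)
end;
```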

Recommended: clear transaction boundaries in business logic

Many legacy applications spread data changes across UI events. That increases the risk of partial updates and makes testing harder. A robust FireDAC approach is: the use case (service/business logic) starts and ends the transaction, not the UI. Even in pure VCL desktop software this creates a robust core that is later easier to reuse as a service or API.
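
Sketched as a use-case method (the posting steps are placeholders; the point is that the transaction frame lives in the service, not in button handlers):

```delphi
procedure TInvoiceService.PostInvoice(const AInvoiceId: Integer);
begin
  FConn.StartTransaction;
  try
    UpdateInvoiceHeader(AInvoiceId);  // illustrative steps of the use case
    InsertLedgerEntries(AInvoiceId);
    MarkAsPosted(AInvoiceId);
    FConn.Commit;
  except
    FConn.Rollback;
    raise; // let central error handling log and translate the exception
  end;
end;
```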

Extensible toward services and REST

Anyone who later adds a REST server, operates Windows or Linux services, or wants to connect a customer portal benefits from a clean data layer. FireDAC is suitable for this if connection management, error handling and—depending on server load—pooling are at least considered as a target. This does not have to be implemented in the first step, but the architecture should not block it.

Migration strategy: introduce FireDAC gradually, decommission BDE in a controlled way

In B2B environments a big bang is rarely realistic: too many business processes, too much operational responsibility, too little acceptance for long downtimes. A stepwise BDE replacement is generally the safer route.

Phase 1: inventory and risk map

A useful inventory counts not only components but evaluates behavior and couplings:

  • Which database(s) are used: Paradox/dBase, Firebird/InterBase, SQL Server, PostgreSQL, MariaDB?
  • Where are TTable accesses, where is SQL used via TQuery, where are stored procedures used?
  • How are transactions handled today (explicit, implicit, cached updates, mixed patterns)?
  • Which reports/exports expect certain dataset properties (sorting, filtering, calculated fields)?
  • Which third-party components or in-house frameworks are BDE-specific?

From this map it becomes apparent whether the replacement only affects access or whether a database refactor (e.g. Paradox → SQL Server/PostgreSQL/MariaDB) is sensible or necessary in parallel.

Phase 2: FireDAC foundation (without UI changes)

Before you migrate screens, FireDAC should be technically in place:

  • Central DataModule or service class with TFDConnection
  • Configuration model for connection strings (e.g. INI/JSON) and clean secrets management
  • Standardized error handling (convert DB exceptions into understandable, loggable messages)
  • Tracing/monitoring options for pilot operation (activatable on demand, not permanently “loud”)
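
For the standardized error handling, one workable pattern is to translate FireDAC's EFDDBEngineException into stable, loggable texts via its Kind property (the message texts are illustrative; full code/SQL context belongs in the log, not in the user dialog):

```delphi
uses
  FireDAC.Stan.Error; // EFDDBEngineException and error kinds

function DescribeDbError(E: EFDDBEngineException): string;
begin
  case E.Kind of
    ekUKViolated:   Result := 'The record already exists (unique key).';
    ekFKViolated:   Result := 'The record is still referenced elsewhere.';
    ekRecordLocked: Result := 'The record is locked by another user.';
  else
    Result := 'Database error: ' + E.Message;
  end;
end;
```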

It is important that binding standards emerge from this: naming conventions, parameter rules, logging schema, default settings per database.

Phase 3: pilot module with real business relevance

A good pilot area is functionally bounded yet actually used. Goal: develop and verify patterns.

  • TQuery → TFDQuery (including parameterization and typing)
  • Define transaction scope and make it visible in the code
  • Prove result equality (compare business-relevant result sets)
  • Measure performance (response times, DB load, network traffic)

At the end of the pilot there should be an internal checklist by which every further module is migrated. This reduces risk and makes effort plannable.

Phase 4: mass migration and deployment cleanup

After the pilot, modules are switched over one by one. In parallel the BDE is removed as an operational dependency:

  • Remove installer scripts and documentation for BDE setups
  • Eliminate alias definitions, NetDir configuration and special paths
  • Align build/release pipeline to new dependencies (client libs, drivers)

This cleanup is essential: as long as BDE parts survive in deployment, operational risk remains.

Pitfalls: common causes of business side effects

Many migrations fail not because of FireDAC, but because of implicit assumptions in legacy code. These areas should be prioritized early.

SQL dialects and historically grown SQL

BDE applications often contain SQL that “accidentally” worked with a particular driver: implicit joins, inconsistent alias usage, DB-specific functions, unclear sort orders. In migration:

  • Make SQL explicit (JOIN syntax instead of implicit WHERE-based joins)
  • Check reserved words and identifiers (e.g. DATE, USER, ORDER as column names)
  • Unify or encapsulate date/time and string functions

FireDAC offers adaptation options, but the sustainable right solution is DB-compliant, readable SQL.
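
As an illustration of making an implicit join explicit (table names invented):

```delphi
// Historically grown, driver-dependent form:
//   SELECT C.NAME, O.TOTAL FROM CUSTOMER C, ORDERS O
//   WHERE C.ID = O.CUSTOMER_ID AND O.STATUS = 'OPEN'

Qry.SQL.Text :=
  'SELECT C.NAME, O.TOTAL ' +
  'FROM CUSTOMER C ' +
  'JOIN ORDERS O ON O.CUSTOMER_ID = C.ID ' +
  'WHERE O.STATUS = :STATUS ' +
  'ORDER BY C.NAME';  // make the sort order explicit as well
Qry.ParamByName('STATUS').AsString := 'OPEN';
Qry.Open;
```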

Data type mapping: Boolean, date/time, memo/blob, NULL

The BDE often interpreted things liberally. FireDAC is more precise—which is good but requires rules. Typical topics:

  • Boolean: BIT/SMALLINT/CHAR(1) – define clearly, avoid implicit conversions
  • Date/time: DATETIME vs. DATETIME2, milliseconds, sort/compare logic; timezone questions in distributed systems
  • Memo/blob: fetch behavior (OnDemand), encoding, client memory consumption
  • NULLability: legacy code that mixes empty strings and NULL leads to subtle logic bugs

A lean data type catalogue has proven useful: per business-critical table/column target types (DB and Delphi) plus rules for NULL, defaults and formatting.

Transactions: from implicit to deliberately orchestrated

A common error in legacy Delphi projects is that the system relied on implicit commits (“if I close the dataset, it’s saved”). FireDAC provides clear APIs (StartTransaction, Commit, Rollback). The modernization benefit arises when transactions are understood as a business frame:

  • Use case starts the transaction
  • Multiple updates run within the same connection
  • Commit/rollback happens centrally with traceable error handling

This reduces inconsistencies and is crucial if the application is later extended with services or interfaces.

Cached updates and conflict handling (concurrency)

Many BDE applications used cached updates as an “offline edit” mechanism. FireDAC can do similar, but the rules must be explicit:

  • Which fields are keys, which serve concurrency checks?
  • How are conflicts resolved (rowversion/timestamp, “last write wins”, user decision)?
  • What happens on partial failures in batch operations?
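
With FireDAC these rules can be made explicit on the dataset. A sketch (the concurrency policy and conflict handler are project decisions, not defaults to copy):

```delphi
Qry.CachedUpdates := True;
Qry.UpdateOptions.UpdateMode := upWhereKeyOnly; // concurrency check by key
Qry.UpdateOptions.KeyFields := 'ORDER_ID';      // explicit key definition
Qry.Open;

// ... user edits rows against the cache ...

if Qry.ApplyUpdates(0) > 0 then   // AMaxErrors = 0: stop at the first error
  HandleUpdateConflicts(Qry)      // illustrative conflict resolution hook
else
  Qry.CommitUpdates;              // merge applied changes into the cache
```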

In modernizations it is often sensible to move conflict logic closer to business logic or into a service layer rather than hiding it solely in UI dataset behavior.

TTable/Paradox-heavy applications: FireDAC is not the only concern

If the application is heavily dependent on file-based access (TTable against Paradox), replacing the BDE with FireDAC is only part of the story. FireDAC is primarily intended for SQL databases. The central decision then is: will data storage be modernized to a server DB?

  • Migration to SQL Server, PostgreSQL or MariaDB
  • Introduction of a roles/permissions concept and proper backup/restore processes
  • Stable multi-user operation without file locking problems

If an immediate database change is not organizationally possible, a two-step approach is often pragmatic: first stabilize the access layer and reduce UI coupling, then perform the data migration with a clear test and cutover strategy.

Reporting, exports and third-party components

Reports often depend on details: sort orders, filter precedence, calculated fields, master/detail behavior. For a controlled transition:

  • Identify critical reports and treat them as a regression test suite
  • Produce datasets for reports deterministically (views/stored procedures or well-defined queries)
  • Reduce UI-side filter chains that depend on dataset behavior

The goal is reproducible result equality, especially for audit-relevant analyses.

Architecture upgrade during the FireDAC migration: decouple pragmatically

The BDE replacement is a good moment to extract data access from forms and event handlers. This does not mean a complete re-architecture project is necessary. Even moderate measures often have large effects.

Pragmatic target structure (compatible with layer-3 architecture)

  • Connection/unit-of-work: manages connection and transaction, provides query objects
  • Repository/DAO: encapsulates SQL and data access per business area
  • Service/use case: orchestrates business logic, validations and transaction scope
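
A compact interface sketch of this structure (all names and the GUID are illustrative; the point is the separation of responsibilities, not a specific framework):

```delphi
type
  // Repository: encapsulates SQL and data access for one business area
  ICustomerRepository = interface
    ['{B2F7C1D0-1111-4222-8333-444455556666}']
    function FindByNumber(const ANo: string): TCustomer; // TCustomer: own DTO
    procedure Save(const ACustomer: TCustomer);
  end;

  // Unit of work: owns connection and transaction lifecycle,
  // handed to use cases instead of a global data module
  IUnitOfWork = interface
    ['{C3A8D2E1-2222-4333-9444-555566667777}']
    procedure BeginWork;
    procedure CommitWork;
    procedure RollbackWork;
  end;
```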

This structure is compatible with a later layer-3 architecture and eases follow-up projects: REST interfaces, background services, multiplatform clients or portal coupling.

Important effect: fewer global side effects

Many BDE projects work with global data modules and implicit states. FireDAC works that way too, but modernization becomes more stable if states are localized: clear lifecycle of connection/transaction, reproducible error paths, fewer side effects from global state.

Performance and stability: configure FireDAC deliberately

FireDAC is powerful, but performance is a combination of SQL, indexing, fetch strategy and connection management. In migrations it frequently becomes apparent: the BDE covered inefficient patterns because data volumes used to be smaller or the system ran locally.

Fetch strategies and UI lists

  • Load only needed columns for lists (no SELECT *)
  • Server-side sorting and targeted filters instead of client-side chains
  • For large datasets: paging or incremental loading
  • LOB fields (memo/blob) load only when actually needed

FireDAC offers suitable options; decisive is the business decision which data a user actually needs in each context.
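
A sketch of such fetch tuning for a large list view (view and column names invented):

```delphi
Qry.FetchOptions.Mode := fmOnDemand;   // fetch rows as the grid scrolls
Qry.FetchOptions.RowsetSize := 200;    // rows per round trip
Qry.FetchOptions.Items :=
  Qry.FetchOptions.Items - [fiBlobs];  // defer memo/blob loading
Qry.SQL.Text :=
  'SELECT ORDER_ID, ORDER_DATE, CUSTOMER_NAME ' +
  'FROM V_ORDER_LIST ORDER BY ORDER_DATE DESC';
Qry.Open;
```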

Prepared statements and parameterization

Parameterized queries are not only a security standard (to avoid SQL injection) but also improve plan reuse in many databases. Additionally, type sloppiness in legacy code becomes visible and can be corrected deliberately. Especially in grown systems this is a quality gain that pays off in fewer edge cases and better diagnostics.
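
A sketch of the prepare-once, execute-many pattern for a batch insert (table and variable names invented):

```delphi
Qry.SQL.Text :=
  'INSERT INTO IMPORT_ROW (BATCH_ID, LINE_NO, PAYLOAD) ' +
  'VALUES (:BATCH, :LINE, :PAYLOAD)';
Qry.Prepare;  // parse and plan once, reuse for every execution
for I := 0 to High(Lines) do
begin
  Qry.ParamByName('BATCH').AsInteger := BatchId;
  Qry.ParamByName('LINE').AsInteger := I + 1;
  Qry.ParamByName('PAYLOAD').AsString := Lines[I];
  Qry.ExecSQL;
end;
```

For very large imports, FireDAC's Array DML (setting Params.ArraySize and executing in batches) is a further option worth evaluating.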

Connection management: desktop vs. service/REST

In classic desktop clients a long-lived connection per client is often practical. In services or REST servers other patterns are common: short-lived requests, parallel access, connection pooling. If you see the BDE replacement as part of a larger modernization, you should consider these differences in the target picture so later expansions do not have to start again at data access.

Test and acceptance strategy: prove result equality

In a BDE replacement the main risk is rarely “the application won’t start”, but subtle business deviations: sort orders, rounding, NULL handling, transaction boundaries, side effects of triggers/constraints in modern DBs. A viable test strategy includes:

  • SQL regression: execute critical queries against defined test data and compare result sets
  • Use-case tests: verify core processes (e.g. posting, approving, reversing, import/export) with expected values
  • Multi-user/stability tests: lock behavior, deadlocks, timeouts, transaction duration
  • Logging/observability: capture DB errors in a structured way (error codes, context, affected query), not just as an error dialog
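
For the SQL regression point, a DUnitX-style sketch against defined test data illustrates the idea (the expected values are placeholders to be taken from a reference run against the BDE version):

```delphi
procedure TSqlRegressionTests.TestOpenOrdersAggregate;
begin
  FQry.SQL.Text :=
    'SELECT COUNT(*) AS CNT, SUM(TOTAL) AS SUM_TOTAL ' +
    'FROM ORDERS WHERE STATUS = :S';
  FQry.ParamByName('S').AsString := 'OPEN';
  FQry.Open;
  // Expected values come from a documented reference run
  Assert.AreEqual(ExpectedCount, FQry.FieldByName('CNT').AsInteger);
  Assert.AreEqual(ExpectedTotal,
    FQry.FieldByName('SUM_TOTAL').AsCurrency);
end;
```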

Companies benefit doubly here: the tests secure the migration and create a basis to roll out later changes to the data model or interfaces in a controlled way.

Target databases in FireDAC projects: typical options

FireDAC is intentionally broad, but each database brings its own rules. In modernizations the following targets are common:

SQL Server

Typical in Windows-dominated IT landscapes. Important points: consistent Unicode types (NVARCHAR), modern time types (DATETIME2), clear identity/sequence strategy, defined isolation levels and a clean handling of locks.

PostgreSQL

Strong on integrity and features. In migrations relevant: identifier case sensitivity, data types (boolean/uuid/jsonb) and dialect differences. FireDAC can connect PostgreSQL productively if client libraries and deployment are organized cleanly.

MariaDB/MySQL

Common when desktop software interacts with web or portal components. Important: consistent utf8mb4, InnoDB as engine, clear transaction and index strategy. FireDAC supports MariaDB/MySQL reliably when parameters and types are clearly defined.

Regardless of the target, a BDE replacement is most stable when database standards are established in parallel (schema versioning, migration scripts, roles/permissions, backup/restore, monitoring).

Practical recommendations for a plannable FireDAC migration

Reduce dependencies before you switch many components

If SQL and dataset logic are embedded in many forms, each change becomes expensive. An intermediate step that consolidates SQL into a few access classes reduces the migration surface significantly. After that the actual switch to FireDAC is often faster and less risky.

Migrate an early transactional core process

“Simple lists” are convenient as an entry point, but risk is reduced more by migrating a process with real updates and dependencies early. If transactions, data types and error paths are clean there, the rest of the migration becomes more plannable.

Treat deployment as equal work

Code changes are only half the story. Clarify early:

  • Which client libraries/drivers are needed per database?
  • How are these versioned, signed (if applicable) and rolled out?
  • How are connection parameters managed, and who is allowed to change them?
  • What does the support process look like when DB access fails?

Use FireDAC as a modernization anchor—without starting over

The replacement is an opportunity for targeted quality levers: parameterization, transaction boundaries, logging, consistent error texts. This reduces operating costs and makes later extensions (interfaces, services) much less risky, without reinventing the application’s business functionality.

Conclusion: BDE replacement with FireDAC is controllable modernization—if treated as an architectural topic

The BDE has supported many Delphi applications for years. Today, however, it is a structural risk: for 64‑bit, for standardized deployment, for modern security requirements and for connecting to contemporary databases. FireDAC is the suitable successor, but not as a “component swap overnight”. The safe route is a stepwise migration with a solid foundation, a pilot module, binding rules for data types and transactions and tests that prove result equality.

If you would like to plan the BDE replacement in a structured way—including inventory analysis, migration path and FireDAC target architecture—the most sensible next step is a technical alignment of your framework conditions: https://net-base-software-gmbh.de/kontakt/