Net-Base Magazine

09.04.2026

Cleanly bring portals, desktop and data together

A portal is more than just an additional frontend only if it uses the same business logic, the same permissions and the same data quality as the desktop clients and the back office. This article shows how companies can integrate web portals, Delphi desktop applications, REST services and data storage...


Many companies start with a portal for an understandable reason: customers, partners or field staff should be able to initiate processes themselves, retrieve documents, track orders or report faults. At first glance this looks like a pure frontend project. In reality, however, success is decided not by the web UI but by whether portal, desktop client and back office work on the same business truth.

As soon as portal accesses hit the same data basis as a mature desktop application, typical tensions arise: different authorization concepts, competing “authoritative” data, divergent validations, media breaks in approvals or an inconsistent understanding of status and versions. If these issues are not solved cleanly, you unintentionally build a parallel second system — with double maintenance, contradictory processes and ever higher operating costs.

This article describes how companies can bring portals, desktop and data together cleanly: via a clear layered architecture, robust REST services, consistent data models, traceable processes and a pragmatic modernization path for legacy software (often Delphi-based). The goal is an architecture that works today and remains extensible tomorrow — without panic and without a “big bang”.

Why “portal alongside desktop” rarely works

A portal only delivers value once it becomes an integral part of the business application. “On the side” usually means in practice: separate validations, separate user management, separate status logic and often separate reporting. That is hardly noticeable at the beginning, but becomes more expensive with every extension.

Typical symptoms of a parallel world

  • Contradictory data: Master data is maintained differently in the portal than in the desktop. The question “which field is authoritative?” is not answered but bypassed.
  • Different business rules: The desktop validates more (or different) rules than the portal. Errors only surface in downstream processes.
  • Approvals via media breaks: The portal initiates a process, but approval is done by e-mail or manually in the desktop — without an audit trail.
  • Interfaces grow uncontrolled: Instead of a stable API, point-to-point exports/imports or “special endpoints” per portal screen emerge.
  • Version problems: Portal and desktop expect different data structures; releases must be synchronized without a clear compatibility strategy.

The central cause: business logic sits in the wrong layer

In many legacy applications business logic lives in the desktop client: validations, status transitions, calculations, plausibility checks. A portal that accesses the database directly or re-implements the logic cannot be consistent with that. The solution is not “more coordination”, but a technical and functional decoupling: business logic must be placed in a central service layer that both desktop and portal use.

Desired state: One system, multiple clients

When companies connect portals and desktop, the real goal is not “web instead of desktop”, but a shared system that can be operated through multiple UIs. The desktop remains important where complex workflows, large data volumes, special device integration or power-user-oriented concepts are required. The portal is strong for self-service, 24/7 access, roles outside the company and simple, guided processes.

Common building blocks

A viable target uses common core components:

  • Central data model (with clear ownership rules: which entity is maintained where?).
  • Shared business logic (e.g. in REST services), not duplicated in portal and desktop.
  • Consistent rights and role model (RBAC/ABAC depending on complexity).
  • Traceability (audit logging, status histories, “who changed what, when and why”).
  • Versionable APIs (compatibility rules, deprecation, migration paths).

Architectural fundamentals: Three tiers instead of “directly to the database”

For connecting portal and desktop, a three-tier architecture has proven effective: presentation (portal/client), business logic (services) and data access (persistence). Discipline is what matters: clients must not bypass the business logic and access tables directly. This applies especially where the desktop historically “did everything itself”.

Layer 1: Clients (Portal, Desktop, possibly Mobile)

Clients should cover UI, interaction and local requirements: validations for better usability make sense, but they do not replace server-side rule checks. For Delphi legacy software this is often the point at which a monolithic VCL client is transformed step by step into a client that consumes services. For multi-platform requirements, part of the functionality can be implemented in new clients while the core remains stable.

Layer 2: Service layer (REST server, background services)

The service layer is the functional middle: authentication, authorization, business rules, transactions, status transitions, document processes, idempotence, concurrency. Here it is decided whether portal and desktop really work together or only share the same database server. A clean REST server is not “a few endpoints”, but a consistent API with a clear domain language.

Layer 3: Data access (SQL, drivers, migrations)

The data access layer encapsulates database details: SQL dialects, transactions, stored procedures, indexes, migrations. Especially in Delphi systems with history, modernization is often needed: moving away from the Borland BDE to modern drivers and consistent access, for example via a BDE replacement with native connection. Only then do you get deployment stability, clear transaction boundaries and a data basis that reliably supports portal-heavy access patterns (many short requests).

REST API as the connecting element — but done right

A REST API is the natural junction to connect portal, desktop and services. The crucial point is to design it so that it models processes — not just tables.

Resources vs. actions: making domain logic visible

Many APIs start “CRUD-like”. That is acceptable for simple master data, but fails for processes with status, approvals, calculations or side effects. In those cases explicit actions make sense, for example:

  • /orders/{id}/approve instead of setting “Status=Approved” via update
  • /tickets/{id}/assign with checks, permissions, history
  • /documents/{id}/publish with versioning and approval workflow

This makes the system more understandable, testable and consistent between portal and desktop.
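
As a sketch of such an action endpoint, the following Python fragment shows why /orders/{id}/approve is more than a status update: the action checks permissions and valid transitions and records history in one place. All names are hypothetical and framework-independent; a real service would wire this into its REST routing.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    id: int
    status: str = "submitted"
    history: list = field(default_factory=list)

class DomainError(Exception):
    """Raised when a business rule forbids the requested action."""

def approve_order(order: Order, user: str, user_roles: set) -> Order:
    """Explicit 'approve' action: rule checks instead of a blind status update."""
    if "approver" not in user_roles:
        raise DomainError("user lacks the 'approver' role")
    if order.status != "submitted":
        raise DomainError(f"cannot approve an order in status '{order.status}'")
    order.status = "approved"
    order.history.append({"action": "approve", "by": user})
    return order

# Usage: only valid transitions succeed, and every approval leaves a trace.
o = Order(id=42)
approve_order(o, "alice", {"approver"})
print(o.status)  # approved
```

Because the transition rules sit in one function behind the endpoint, portal and desktop cannot drift apart on what “approved” means.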

Transactions, concurrency and idempotence

Portal accesses are typically short, frequent and parallel. This implies three obligations:

  • Clean transaction boundaries: Every business operation must be atomic, including logging and status transitions.
  • Optimistic concurrency: ETag/RowVersion or similar mechanisms prevent silent overwrites. Desktop and portal should detect conflicts and resolve them deliberately.
  • Idempotent endpoints: Especially for “submit” actions (e.g. orders) repeats due to network issues must be safe.
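
The last two obligations can be sketched in a few lines of Python. This is an in-memory model, not tied to any database, and all names are illustrative: a version counter rejects stale writes, and an idempotency key makes repeated submits safe.

```python
import uuid

class ConflictError(Exception):
    """Raised when a write is based on an outdated version of the row."""

class OrderStore:
    """In-memory sketch of optimistic concurrency plus idempotent submits."""
    def __init__(self):
        self.rows = {}          # order id -> {"data": ..., "version": int}
        self.idempotency = {}   # idempotency key -> order id

    def update(self, order_id, new_data, expected_version):
        row = self.rows[order_id]
        if row["version"] != expected_version:
            # Someone else changed the row since this client read it.
            raise ConflictError("version mismatch, reload and retry")
        row["data"] = new_data
        row["version"] += 1
        return row["version"]

    def submit(self, idempotency_key, data):
        # Repeating the same submit (e.g. after a network timeout) is safe:
        # the stored key returns the original order instead of a duplicate.
        if idempotency_key in self.idempotency:
            return self.idempotency[idempotency_key]
        order_id = str(uuid.uuid4())
        self.rows[order_id] = {"data": data, "version": 1}
        self.idempotency[idempotency_key] = order_id
        return order_id
```

In a real system the version would be a RowVersion column or ETag header and the idempotency keys would live in the database, but the contract toward portal and desktop stays the same.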

API versioning without forcing releases

When portal and desktop follow different release cycles, the API needs a clear compatibility strategy. In practice, a versioned API (e.g. /v1/…) is complemented by rules: extensions are backwards-compatible (new fields optional), breaking changes are introduced via new versions, old versions receive deprecation periods. This prevents the desktop from breaking with every portal change — and vice versa.
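
One way to implement such rules, sketched in Python with hypothetical field names: the serializer keeps the v1 shape stable and only adds optional fields for v2 clients, so old clients are never confronted with a changed structure.

```python
def serialize_order(order: dict, version: int) -> dict:
    """Version-aware representation of an order resource."""
    # The v1 shape is frozen: these fields never change or disappear.
    body = {"id": order["id"], "status": order["status"]}
    if version >= 2:
        # New, optional field: v1 clients never see it and are unaffected.
        body["approvedBy"] = order.get("approved_by")
    return body

order = {"id": 7, "status": "approved", "approved_by": "alice"}
v1 = serialize_order(order, version=1)   # stable shape for the desktop
v2 = serialize_order(order, version=2)   # extended shape for the portal
```

The same idea applies in reverse for requests: the server accepts missing optional fields and fills defaults, instead of requiring all clients to upgrade at once.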

Rights, roles and multi-tenancy: One model, multiple perspectives

Portals bring new user groups: customers, partners, subcontractors, auditors. Desktop applications are often designed for internal roles. “Simply the same rights” rarely works. The goal is a unified model that covers both worlds.

RBAC as a base, ABAC where necessary

For many B2B systems RBAC (Role-Based Access Control) is sufficient: roles define which actions and data areas are visible. It gets more complex when rights depend on context (tenant, location, contractual relationship, project assignment). Then ABAC (Attribute-Based Access Control) complements the model: decisions depend on attributes of the user and the resource.
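
A minimal sketch of this combination, assuming hypothetical roles and a tenant attribute: RBAC decides whether the action is granted at all, and an ABAC-style attribute check narrows the decision to the caller's tenant.

```python
# Hypothetical role-to-permission mapping (RBAC layer).
ROLE_PERMISSIONS = {
    "portal_customer": {"ticket:read", "ticket:create"},
    "internal_agent": {"ticket:read", "ticket:create", "ticket:assign"},
}

def is_allowed(user: dict, action: str, resource: dict) -> bool:
    # RBAC: at least one of the user's roles must grant the action.
    granted = set()
    for role in user["roles"]:
        granted |= ROLE_PERMISSIONS.get(role, set())
    if action not in granted:
        return False
    # ABAC: attributes of user and resource narrow the decision further,
    # here a simple tenant match as the minimal example.
    return user["tenant_id"] == resource["tenant_id"]
```

The important design point is that this check lives in the service layer and is called for portal and desktop requests alike, rather than being re-implemented per client.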

Define multi-tenancy cleanly

Multi-tenancy is not just “TenantID in every table”. Relevant aspects are:

  • Data isolation: Who may see which entities? How are cross-connections prevented?
  • Configuration per tenant: Workflows, required fields, document templates, interfaces.
  • Audit and export: Traceability and data provisioning per tenant (e.g. for audits).

Especially in data models that have grown over time, it is worth treating multi-tenancy as a dedicated work package before building portal features on top.
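
One pragmatic way to enforce data isolation is a single choke point through which all reads pass, sketched here in Python over an in-memory row list (names are illustrative; in a real system this sits in the data access layer and translates to SQL with a mandatory tenant filter).

```python
class TenantScopedRepo:
    """All reads go through one choke point that enforces the tenant filter."""
    def __init__(self, rows):
        self.rows = rows  # list of dicts, each carrying a "tenant_id"

    def find(self, tenant_id, **filters):
        result = []
        for row in self.rows:
            if row["tenant_id"] != tenant_id:
                continue  # isolation: never leak other tenants' rows
            if all(row.get(k) == v for k, v in filters.items()):
                result.append(row)
        return result
```

The point of the pattern is that no call path exists that forgets the tenant filter — forgetting it once in ad-hoc SQL is the classic source of cross-tenant leaks.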

SSO and identity: Don’t isolate it in the portal

A common mistake is a separate user management in the portal while the desktop continues to authenticate “locally” or via other mechanisms. Better is a central identity strategy: internal users via corporate SSO (e.g. Entra ID/AD), external users via separate policies but within a common identity domain. What matters is not a specific provider but the clear separation of authentication (who are you?) and authorization (what are you allowed to do?).

Data quality and “authoritative data”: Governance instead of gut feeling

If portal and desktop edit the same entities, it must be clear who is authoritative for which data. Without this governance silent inconsistencies arise. A simple but effective method is an ownership matrix:

  • Entity (e.g. customer, contract, item, ticket)
  • Authoritative system (portal, desktop, ERP, CRM)
  • Write permissions (who may create/change?)
  • Synchronization (push, pull, events, time windows)
  • Validation rules (where are checks performed centrally on the server?)
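
Such a matrix can even live in code or configuration so that services enforce it instead of relying on convention. A minimal sketch with invented entries:

```python
# Hypothetical ownership matrix: which system is authoritative per entity.
OWNERSHIP = {
    "customer": {"authoritative": "CRM",    "writable_by": {"CRM"},              "sync": "event"},
    "contract": {"authoritative": "ERP",    "writable_by": {"ERP"},              "sync": "pull"},
    "ticket":   {"authoritative": "portal", "writable_by": {"portal", "desktop"}, "sync": "none"},
}

def may_write(system: str, entity: str) -> bool:
    """Reject writes from systems that are not allowed to own the entity."""
    return system in OWNERSHIP[entity]["writable_by"]
```

A service-layer write path that consults this table turns the governance question from "gut feeling" into an explicit, testable rule.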

Events and post-processing instead of direct copies

Many processes require post-processing: create a document, trigger an e-mail, transmit data to ERP, sign a PDF, write an index to a DMS. This should not be implemented as “portal does it directly”, but as a service workflow. In practice robust background services (Windows Services or Linux services) are often the right complement to the REST server: the API call triggers work, a worker processes reliably with retry strategy and logging.
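
A minimal Python sketch of such a worker step: the retry loop with exponential backoff and a dead-letter list are placeholders for a real queue, persistence and structured logging.

```python
import time

DEAD_LETTER = []  # failed jobs kept for manual inspection or re-drive

def process_with_retry(job, handler, max_attempts=3, base_delay=0.01):
    """Run a job handler with retries; dead-letter the job on final failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return handler(job)
        except Exception:
            if attempt == max_attempts:
                DEAD_LETTER.append(job)
                raise
            # Exponential backoff before the next attempt.
            time.sleep(base_delay * 2 ** (attempt - 1))
```

The API call only enqueues the job and returns quickly; the worker owns the retry policy, so a flaky ERP or mail server never blocks the portal request itself.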

Delphi desktop and portal: Modernize without starting over

In many companies Delphi is not a “legacy burden” but the productive basis of critical business processes. The challenge is usually less the compiler than structure, data access and deployment. A portal project is often the right time to refactor the desktop so it becomes service-oriented.

Gradual refactor: Strangler pattern for business logic

Instead of rewriting everything, business logic is iteratively extracted from the client:

  • Phase 1: API for selected core cases (e.g. create ticket, approve order). The desktop uses the API in parallel to the existing path.
  • Phase 2: More processes move to the service layer; the desktop becomes increasingly “UI + offline-near functions”.
  • Phase 3: Old direct DB accesses are reduced; data access and validations are centralized.

The result is not necessarily “web replaces desktop”, but a system that serves both in a controlled manner.

BDE replacement and FireDAC: Foundation for stable services

If the legacy still contains BDE-based data access, this is a risk factor for portal expansion: deployment, driver availability, 64-bit/ARM64 paths and debugging become unnecessarily difficult. Replacing the BDE with a native connection (e.g. FireDAC or comparable native access) provides:

  • Clear transactions for API operations
  • Better performance under concurrent portal access
  • Cleaner migration to MariaDB, PostgreSQL or SQL Server
  • More stable deployment in heterogeneous environments

Multiplatform and operations: Desktop, services, ARM64

Many companies now plan more heterogeneously: Windows clients, occasional macOS, server operation on Linux, and mid-term Windows 11 ARM64 in the client environment. This influences decisions early:

  • Native dependencies (drivers, COM, reporting components) must be checked for platform compatibility.
  • Service operation (Linux services) can be useful for integrations and worker jobs, while the desktop remains Windows-focused.
  • API-first reduces platform coupling: new clients speak the same interface.

Integrations: ERP, DMS, CRM — clean via interfaces instead of data copies

Portals are rarely standalone. They often need to create ERP orders, read CRM accounts, write documents to a DMS or fetch shipping data from logistics providers. The more systems involved, the more important a clear integration style becomes.

Interface governance

Important questions to decide before implementation include:

  • Which source is authoritative? (ERP leads prices, CRM leads contacts etc.)
  • Synchronous or asynchronous? (real-time for validation, asynchronous for transfers)
  • Error handling: What happens on partial failures? Are there queues, retries, dead-letter?
  • Logging: Which data is stored in an audit-proof manner?

Documents, reporting and PDF workflows

A portal often produces document printing and download areas: delivery notes, invoices, protocols, certificates, confirmations. Technically this is more than “generate PDF”: versioning, approvals, traceability, access rights and retention periods come into play. A proven approach is a document service that manages metadata (version, status, visibility) and centrally controls generation (rendering) — instead of building PDFs “somewhere” in the portal frontend.

Operations and security: What matters in everyday use

When portal and desktop rely on a core system, operational relevance increases. The architecture must therefore be not only functional but also operable.

Logging, monitoring, audit

For B2B systems three levels are important:

  • Technical logging (request IDs, errors, runtimes)
  • Business logging (status changes, approvals, relevant decisions)
  • Audit trail (who changed which data, incl. before/after as needed)

A portal without a reliable audit trail will sooner or later cause discussions with business units, auditors or customers — especially regarding approvals and contract data.

Rate limiting and protection against abuse

Portals are more public than desktop applications. Even if only customers access them, the system must handle faulty clients, accidental load or automated requests. Rate limiting, sensible payload limits, upload validation and clear timeouts protect not only from attackers but from instability in daily operations.
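
A classic building block here is a token bucket per client. The following is a minimal in-memory sketch; a real deployment would keep one bucket per API key or IP, ideally in shared storage, and answer rejected requests with HTTP 429.

```python
import time

class TokenBucket:
    """Per-client rate limiter: 'rate' tokens/second, bursts up to 'capacity'."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The capacity controls how large a burst a well-behaved client may send, while the rate bounds sustained load — both protect the shared core system that desktop users depend on as well.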

Database performance under portal load

Portal accesses often produce many small reads, filters, pagination and searches. Common pitfalls are missing indexes, too broad SELECTs, “N+1” queries or unclear sorting. Data access should therefore be consistently designed for:

  • paginated lists (server-side, stably sorted)
  • targeted projections (only required fields)
  • filters with indexes (especially tenant, status, date)
  • cache strategies (for master data where allowed)
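
The first two points can be sketched in a few lines of in-memory Python, standing in for real SQL with ORDER BY, LIMIT/OFFSET (or keyset pagination) and a column list instead of SELECT *:

```python
def list_page(rows, *, fields, sort_key, page, page_size):
    """Server-side pagination: stable sort, then slice, then project fields."""
    # Tie-break on "id" so identical sort values still yield a stable order
    # across pages — otherwise rows can appear twice or vanish between pages.
    ordered = sorted(rows, key=lambda r: (r[sort_key], r["id"]))
    start = (page - 1) * page_size
    window = ordered[start:start + page_size]
    # Projection: return only the fields the list view actually needs.
    return [{k: r[k] for k in fields} for r in window]
```

The stable tie-break is the detail most often missed: without it, two portal users paging through the same list can see inconsistent pages under concurrent writes.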

A pragmatic roadmap for companies

“Bringing portals, desktop and data together cleanly” is a program, not a single ticket. At the same time it must proceed in manageable steps so that business areas quickly see benefits. A proven sequence looks like this:

1) Current state analysis: data, processes, pain points

Which entities are critical? Where do conflicts arise? Which roles need access? Which integrations are mandatory? The result should be a prioritized list of core business processes, not just a UI wish list.

2) Target architecture: fix service layer and rights concept

Before the portal becomes “pretty”, it must be decided how authentication/authorization, transactions, audit and versioning are solved. These are the guardrails that massively influence later costs.

3) Pilot process: an end-to-end flow

A sensible pilot is a process that touches portal, service, data and possibly documents (e.g. create a ticket including attachment and internal processing). This tests architecture and operations under real conditions.

4) Expansion: process families instead of single functions

After that, coherent process chains are implemented rather than individual screens: e.g. “inquiry → offer → approval → order” or “incident → analysis → feedback → closure”. This reduces interface proliferation and increases consistency.

5) Modernization of the desktop: stepwise, measurable

In parallel the desktop is refactored to use the same service logic. This reduces double implementation and simplifies operations because there is one business source.

Conclusion: Consistency is the real portal feature

A portal is not “another access” to the database, but an additional client for the same system. Those who want to connect portals and desktop must consistently centralize business logic, rights, data models and operations: via a three-tier architecture, a robust REST server, clear ownership rules for data and a modernization strategy that does not devalue legacy software but structurally improves it. The result is less friction in daily work, better extensibility and a platform that can absorb new channels (partners, mobile, services) without an architectural break.

If you would like to clarify how a portal can be cleanly connected to your existing desktop and data landscape (including REST API, role model, data access and stepwise modernization), talk to us: https://net-base-software-gmbh.de/kontakt/