Introduction to Data Engineering Concepts |2| Understanding Data Sources and Ingestion

Free Resources


Free Apache Iceberg Course


Free Copy of “Apache Iceberg: The Definitive Guide”


Free Copy of “Apache Polaris: The Definitive Guide”


2025 Apache Iceberg Architecture Guide


How to Join the Iceberg Community


Iceberg Lakehouse Engineering Video Playlist


Ultimate Apache Iceberg Resource Guide
Before we can analyze, model, or visualize data, we first need to get it into our systems. This step, often taken for granted, is known as data ingestion. It's the bridge between the outside world and the internal data infrastructure, and it plays a critical role in how data is shaped from day one.

In this post, we'll break down the types of data sources you'll encounter, the ingestion strategies available, and the trade-offs to consider when designing ingestion workflows.


What Are Data Sources?
At its core, a data source is any origin point from which data can be extracted. These sources vary widely in structure, velocity, and complexity.

Relational databases like MySQL or PostgreSQL are common sources in transactional systems. They tend to produce highly structured, row-based data and are often central to business operations such as order processing or customer management.

APIs are another rich source of data, especially in modern SaaS environments. From financial data to social media feeds, APIs expose endpoints where structured (often JSON-formatted) data can be requested in real time or on a schedule.

Then there are flat files, such as CSV, JSON, and XML, often used in data exports, logs, and external data sharing. While simple, they can carry critical context or fill gaps that structured sources miss.

Sensor data, clickstreams, mobile apps, third-party tools, and message queues all add to the landscape, each bringing its own cadence and complexity.
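As a concrete illustration of API ingestion, here is a minimal Python sketch that pulls records from a hypothetical page-numbered JSON endpoint until it returns an empty page. The `fetch` callable stands in for whatever HTTP client you use, and the field names (`results`, `order_id`) are illustrative, not a real API.

```python
import json
from typing import Callable, Iterator

def ingest_paginated(fetch: Callable[[int], str]) -> Iterator[dict]:
    """Yield records from a page-numbered JSON API until an empty page appears."""
    page = 1
    while True:
        payload = json.loads(fetch(page))   # one HTTP GET per page in practice
        records = payload.get("results", [])
        if not records:
            break                           # an empty page signals the end
        yield from records
        page += 1

# Simulated endpoint: two pages of orders, then empty pages forever.
_pages = {
    1: '{"results": [{"order_id": 1}, {"order_id": 2}]}',
    2: '{"results": [{"order_id": 3}]}',
}

def fake_fetch(page: int) -> str:
    return _pages.get(page, '{"results": []}')

orders = list(ingest_paginated(fake_fetch))
```

In a real pipeline, `fake_fetch` would be replaced by an HTTP call with authentication and rate-limit handling, but the pagination loop stays the same.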


Ingestion Strategies: Batch vs Streaming
Once you identify your sources, the next question becomes: how will you ingest the data?

Batch ingestion involves collecting data at intervals and processing it in chunks. This could be once a day, every hour, or even every minute. It's suitable for systems that don't require real-time updates and where data can afford to be a little stale, such as nightly financial reports or end-of-day sales data.

Batch processes tend to be simpler and easier to maintain. They can rely on traditional extract-transform-load (ETL) workflows and are often orchestrated using tools like Apache Airflow or simple cron jobs.

Streaming ingestion, on the other hand, handles data in motion. As new records are created, say a customer clicks a link or a sensor detects a temperature change, they're ingested immediately. This method is crucial for use cases that require low-latency or real-time processing, such as fraud detection or live recommendation engines.

Apache Kafka is a popular tool for enabling streaming pipelines. It allows systems to publish and subscribe to streams of records, ensuring data flows continuously with minimal delay.
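To make the batch side concrete, here is a minimal sketch of chunked batch ingestion using only the Python standard library: rows from a CSV source are loaded into SQLite in fixed-size batches, each committed independently. The table name, columns, and chunk size are illustrative.

```python
import csv
import io
import sqlite3
from itertools import islice

def batch_ingest(csv_file, conn, chunk_size=2):
    """Load rows from a CSV source into SQLite in fixed-size batches."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (sku TEXT, qty INTEGER)")
    reader = csv.DictReader(csv_file)
    while True:
        chunk = list(islice(reader, chunk_size))
        if not chunk:
            break
        conn.executemany(
            "INSERT INTO sales VALUES (?, ?)",
            [(row["sku"], int(row["qty"])) for row in chunk],
        )
        conn.commit()  # each batch commits on its own, so a crash loses at most one chunk

raw = "sku,qty\nA1,3\nB2,5\nC3,1\n"
conn = sqlite3.connect(":memory:")
batch_ingest(io.StringIO(raw), conn)
total = conn.execute("SELECT SUM(qty) FROM sales").fetchone()[0]
```

A scheduler like cron or Airflow would trigger a job like this on an interval; the chunking keeps memory bounded even for large files.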


Structured, Semi-Structured, and Unstructured Data
Understanding the shape of your data also influences how you ingest it.

Structured data is highly organized and fits neatly into tables. Think SQL databases or CSV files. Ingestion here often involves direct connections via JDBC drivers, SQL queries, or file uploads.

Semi-structured data, like JSON or XML, has an internal structure but doesn't conform strictly to relational models. Ingesting this data may require parsing logic and schema inference before it's usable downstream.

Unstructured data includes images, videos, PDFs, and raw text. These formats typically require specialized tools and more complex handling, often involving metadata extraction or integration with machine learning models for classification or tagging.
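The parsing step for semi-structured data often starts with flattening nested objects into columns. A minimal sketch, assuming dot-separated column names are acceptable downstream (the event payload is made up for illustration):

```python
import json

def flatten(record: dict, prefix: str = "") -> dict:
    """Flatten nested JSON objects into dotted column names for tabular loading."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

event = json.loads('{"user": {"id": 7, "geo": {"country": "DE"}}, "action": "click"}')
row = flatten(event)
# row: {"user.id": 7, "user.geo.country": "DE", "action": "click"}
```

Real-world schema inference also has to handle arrays, nulls, and type conflicts across records, which is why dedicated tools exist for this step.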


Considerations in Designing Ingestion Pipelines
Data ingestion isn't just about moving bytes; it's about doing so reliably, efficiently, and with the future in mind.

Latency requirements play a major role. Does the business need data as it happens, or is yesterday's data good enough? That determines your choice between batch and streaming.

Scalability is another concern. What works for 10,000 records a day might break under 10 million. Tools like Kafka and cloud-native services such as AWS Kinesis or Google Pub/Sub help handle high throughput without compromising performance.

Error handling is essential. What happens if a source API goes down? What if a file arrives with missing fields? Designing retry logic, alerts, and fallback mechanisms helps ensure ingestion pipelines are robust.

Finally, schema evolution can't be overlooked. Data changes over time: columns get added, data types shift. Your ingestion pipeline must be flexible enough to adapt without breaking downstream systems.
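Retry logic is the simplest of these safeguards to sketch. Here is an illustrative Python example of calling a flaky source with exponential backoff; the attempt count and delays are arbitrary, and a production pipeline would add alerting and a dead-letter path on final failure:

```python
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn, retrying failures with exponential backoff; re-raise after the last try."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # a real pipeline would alert or dead-letter the payload here
            time.sleep(base_delay * (2 ** attempt))

# A simulated flaky source that fails twice before succeeding.
calls = {"n": 0}

def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("source unavailable")
    return {"rows": 42}

result = with_retries(flaky_fetch)
```

The same wrapper pattern applies whether the source is an API, a database, or a file drop; only the exceptions you choose to retry on should change.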


Looking Ahead
Getting data into the system is just the beginning. Once it's ingested, it often needs to be transformed to fit the analytical or business context.

In the next post, we'll explore the concepts of ETL and ELT, two core paradigms for moving and transforming data, and look at how they differ in practice and purpose.

Similar Posts


Introduction to Data Engineering Concepts |1| What is Data Engineering?




Data engineering sits at the heart of modern data-driven organizations. While data science often grabs headlines with predictive models and AI, it's the data engineer who builds and maintains the infrastructure that makes all of that possible. In this first post of our series, we’ll explore what data engineering is, why it matters, and how it fits into the broader data ecosystem.


The Role of the Data Engineer
Think of a data engineer as the architect and builder of the data highways. These professionals design, construct, and maintain systems that move, transform, and store data efficiently. Their job is to ensure that data flows from various sources into data warehouses or lakes, where it can be used reliably for analysis, reporting, and machine learning.

In a practical sense, this means working with pipelines that connect everything from transactional databases and API feeds to large-scale storage systems. Data engineers work closely with data analysts, scientists, and platform teams to ensure the data is clean, consistent, and available when needed.


From Raw to Refined: The Journey of Data
Raw data is rarely useful as-is. It often arrives incomplete, messy, or inconsistently formatted. Data engineers are responsible for shepherding this raw material through a series of processing stages to prepare it for consumption.

This involves tasks like:
Data ingestion (bringing data in from various sources)
Data transformation (cleaning, enriching, and reshaping the data)
Data storage (choosing optimal formats and storage solutions)
Data delivery (ensuring end users can access data quickly and easily)
At each stage, considerations around scalability, performance, security, and governance come into play.
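The four stages above can be sketched end to end as plain functions. Everything here (the field names, the in-memory "warehouse") is illustrative, not a real system:

```python
def ingest(raw_lines):
    """Ingestion: parse raw comma-separated lines into records."""
    return [dict(zip(("name", "amount"), line.split(","))) for line in raw_lines]

def transform(records):
    """Transformation: clean up names and cast amounts to integers."""
    return [{"name": r["name"].strip().title(), "amount": int(r["amount"])}
            for r in records]

def store(records, warehouse):
    """Storage: append records to an (in-memory) warehouse table."""
    warehouse.setdefault("sales", []).extend(records)

def deliver(warehouse):
    """Delivery: expose an aggregate that end users can query."""
    return sum(r["amount"] for r in warehouse["sales"])

warehouse = {}
store(transform(ingest(["alice ,10", "bob,5"])), warehouse)
total = deliver(warehouse)
```

Real pipelines swap each function for infrastructure (a message queue, a transformation engine, object storage, a query layer), but the staged shape stays the same.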


Data Engineering vs Data Science
It's common to see some confusion between the roles of data engineers and data scientists. While their work is often complementary, their responsibilities are distinct.

A data scientist focuses on analyzing data and building predictive models. Their tools often include Python, R, and statistical frameworks. Data engineers, on the other hand, build the systems that make the data usable in the first place. They are often more focused on infrastructure, system design, and optimization.

In short: the data scientist asks questions; the data engineer ensures the data is ready to answer them.


A Brief History of the Data Stack
The evolution of data engineering can be seen in how the data stack has changed over time.

In traditional environments, organizations relied heavily on ETL tools to move data from relational databases into on-premise warehouses. These systems were tightly controlled but not particularly flexible or scalable.

With the rise of big data, open-source tools like Hadoop and Spark introduced new ways to process data at scale. More recently, cloud-native services and modern orchestration frameworks have enabled even more agility and scalability in data workflows.

This evolution has led to concepts like the modern data stack and the data lakehouse, topics we'll cover later in this series.


Why It Matters
Every modern organization depends on data. But without a solid foundation, data becomes a liability rather than an asset. Poorly managed data can lead to flawed insights, compliance issues, and lost opportunities.

Good data engineering practices ensure that data is:
Accurate and timely
Secure and compliant
Scalable and performant
In a world where data volumes and velocity are only increasing, the importance of data engineering will only continue to grow.


What’s Next
Now that we’ve outlined the role and importance of data engineering, the next step is to explore how data gets into a system in the first place. In the next post, we’ll dig into data sources and the ingestion process—how data flows from the outside world into your ecosystem.
Oracle E-Business Suite R12.2: Adding and Removing Application Tier Nodes

Oracle E-Business Suite R12.2 Overview
Oracle E-Business Suite (EBS) Release 12.2 introduces a dual file system architecture with online patching, enabling high availability, modular maintenance, and horizontal scalability. Multi-node configurations help enterprises improve application responsiveness, ensure business continuity, and support failover strategies. The add-node and remove-node operations must be executed with strict adherence to Oracle standards to ensure service integrity and lifecycle manageability.
Key benefits include:
Consistent AutoConfig and context-driven configuration management
Shared application tier support via NFS or clustered storage
Dual file system support for run/patch file systems
Centralized and distributed load balancing
Full service enablement (OHS, WebLogic, NodeManager, Forms, Concurrent Managers)
Oracle officially documents this process in My Oracle Support (MOS) notes and the Rapid Clone tooling.
Step-by-Step: Adding a Secondary Node to R12.2

Step 1: Prepare files on the Primary Application Tier to send to the new server node.

# Start Admin Server on RUN file system
cd $ADMIN_SCRIPTS_HOME
./adadminsrvctl.sh start


# Start Admin Server on PATCH file system
cd $ADMIN_SCRIPTS_HOME
./adadminsrvctl.sh start forcepatchfs


# Execute Preclone on Primary Node (appsTier)
cd $ADMIN_SCRIPTS_HOME
perl adpreclone.pl appsTier

Step 2: Create Tar Archive and Transfer

# Navigate to FS1 and compress application tier
cd /u01/appltier/R12PRD/fs1/

nohup tar -czvf EBSapps$(date +%m%d%y).tar.gz EBSapps

# Secure copy tar file generated to new node
scp EBSapps$(date +%m%d%y).tar.gz user@newnode:/u01/appltier/R12PRD/fs1/

Step 3: Unpack and Prepare the New Node

# Uncompress the application tier structure
cd /u01/appltier/R12PRD/fs1/

nohup tar -xzvf EBSapps$(date +%m%d%y).tar.gz
Note: Ensure shared mount points, user groups, and directory structures are aligned across nodes. Examples:

/usr/tmp
/u01/oracle/ebshml/appltier/tmp/shared
/fs_shared/APPLPTMP

Step 4: Prepare Pairs File (Node-Specific)

Example values for /u01/appltier/R12PRD/pairsfiles/pairsfile.txt:

[Base]
s_base=/u01/appltier/R12PRD
s_current_base=/u01/appltier/R12PRD/fs1
s_other_base=/u01/appltier/R12PRD/fs2
s_ne_base=/u01/appltier/R12PRD/fs_ne

[General]
s_applptmp=/u01/appltier/R12PRD/tmp/shared
s_appsgroup=oinstall
s_appsuser=applprd
s_dbuser=oracle
s_dbgroup=dba
s_dbdomain=your_host_domain.com
s_at=/u01/appltier/R12PRD/fs1/EBSapps/appl
s_com=/u01/appltier/R12PRD/fs1/EBSapps/comn
s_tools_oh=/u01/appltier/R12PRD/fs1/EBSapps/10.1.2
s_weboh_oh=/u01/appltier/R12PRD/fs1/FMW_Home/webtier
s_fmw_home=/u01/appltier/R12PRD/fs1/FMW_Home
s_dbGlnam=R12PRD
s_dbSid=R12PRD
s_dbhost=scprd-scan
s_clonestage=/u01/appltier/R12PRD/fs1/EBSapps/comn/clone
s_dbport=1521
s_options_symlinks=Options -FollowSymLinks
s_proxyhost=
s_proxybypassdomain=your_host_domain.com
s_proxyport=
s_nonproxyhosts=
s_javamailer_imapdomainname=NoImapDomain
s_javamailer_imaphost=NoImapHost
s_javamailer_reply_to=NoReplyTo
s_javamailer_outbound_user=changeOnJavaMailerInstall
s_smtphost=ebsprdr12
s_smtpdomainname=your_host_domain.com
s_file_edition_type=run
s_port_pool=0
s_admhost=ebsprdr12
s_atName=ebsprdr12
s_shared_file_system=true
patch_s_port_pool=1

[Web Entry Point Configuration]
s_webentryurlprotocol=http
s_webentryhost=ebsprdr12
s_webentrydomain=your_host_domain.com
s_active_webport=8000
s_endUserMonitoringURL=http://ebsprdr12.your_host_domain.com:8000/oracle_smp_chronos/oracle_smp_chronos_sdk.gif
s_external_url=http://ebsprdr12.your_host_domain.com:8000
s_login_page=http://ebsprdr12.your_host_domain.com:8000/OA_HTML/AppsLocalLogin.jsp

[Instance Specific]
s_temp=/u01/appltier/R12PRD/fs1/inst/apps/R12PRD_ebsprdweb/temp
s_contextname=R12PRD_ebsprdweb
s_hostname=ebsprdweb
s_domainname=your_host_domain.com
s_cphost=ebsprdweb
s_webhost=ebsprdweb
s_config_home=/u01/appltier/R12PRD/fs1/inst/apps/R12PRD_ebsprdweb
s_inst_base=/u01/appltier/R12PRD
s_display=ebsprdweb:0.0
s_forms-c4ws_display=ebsprdweb:0.0
s_ohs_instance=EBS_web_R12PRD_OHS2
s_webport=8000
s_http_listen_parameter=8000
s_https_listen_parameter=4443

[Services To be Enabled on the Secondary Application Tier Node]
s_web_applications_status=enabled
s_web_entry_status=enabled
s_apcstatus=enabled
s_root_status=enabled
s_batch_status=enabled
s_other_service_group_status=enabled
s_adminserverstatus=disabled
s_web_admin_status=disabled

Step 5: Clone Secondary Node

cd /u01/appltier/R12PRD/fs1/EBSapps/comn/clone/bin

perl adcfgclone.pl component=appsTier \
pairsfile=/u01/appltier/R12PRD/pairsfiles/pairsfile.txt \
addnode=yes dualfs=yes
This will instantiate the context, register the WebLogic managed servers, and start the relevant services.

Removing an Application Tier Node (R12.2)

Step 1: Execute adProvisionEBS

perl $AD_TOP/patch/115/bin/adProvisionEBS.pl \
ebs-delete-node \
-contextfile=$CONTEXT_FILE \
-hostname=rnnhml01ebs03 \
-logfile=delete.log

Optionally, to delete a specific managed server:

perl $AD_TOP/patch/115/bin/adProvisionEBS.pl \
ebs-delete-managedserver \
-contextfile=$CONTEXT_FILE \
-managedsrvname=oacore_server1 \
-logfile=/tmp/oacore_server1_delete.txt

Best Practices and Considerations
Always take a backup before cloning or removing nodes.
Validate context files with adchkcfg.sh post-clone.
Run AutoConfig on all nodes after changes.
Sync all tier clocks via NTP.
Monitor via EBS Cloud Control or custom dashboards.
Thank you for reading!
I’m passionate about sharing real-world experiences in Oracle General, Oracle Cloud, Database Optimization, IT Innovation, and Enterprise Solutions.
Similar
Dynamic Radial Menu Revisit

Check out this Pen I made!