dbt-core, ClickHouse and Dagster

This post briefly captures the usage of dbt-core and its integration with Dagster.

Introduction

dbt Core is an open source command line tool that enables data teams to transform data using analytics engineering best practices.

Dagster is an orchestration platform for the development, production, and observation of data assets.

ClickHouse is an open-source column-oriented database management system that allows generating analytical data reports in real-time using SQL queries.

Minimum Software Requirements

Installations

Install the dbt ClickHouse plugin.

pip install dbt-clickhouse

Install the dagster-dbt library.

pip install dagster-dbt dagster-webserver

Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

Initialize the dbt project.

dbt init dbt_data_practitioner 


cd dbt_data_practitioner
touch profiles.yml

dbt_data_practitioner:
  target: dev
  outputs:
    dev:
      type: clickhouse
      schema: sakila_db
      host: localhost
      port: 8123
      user: default
      password: root
      secure: False

dbt debug


dbt docs

dbt docs generate


dbt docs serve


models

cd models  
mkdir sakila_db  
cd sakila_db  
touch actor_film_actor_join.sql
touch point_of_interest_1.sql 
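
A dbt model is just a SELECT statement saved as a .sql file. A minimal sketch of what actor_film_actor_join.sql could contain, assuming the standard Sakila schema is already loaded into ClickHouse (the sample project's actual model may differ):

-- Join actors to their films; column names follow the standard Sakila schema.
select
    a.actor_id,
    a.first_name,
    a.last_name,
    fa.film_id
from sakila_db.actor as a
join sakila_db.film_actor as fa
    on fa.actor_id = a.actor_id

dbt wraps this SELECT in the DDL needed to materialize it as a table or view in ClickHouse.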

Delete the examples folder present inside the models folder.

cd ..
cd .. 

dbt build

dbt build


The tables and views defined in the models are now created in the ClickHouse database.

dbt docs generate
dbt docs serve

Lineage Graph and other details.


Dagster

cd dbt_data_practitioner
dagster-dbt project scaffold --project-name dagster_data_practitioner
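
The scaffolded project exposes every dbt model as a Dagster asset. The generated assets module looks roughly like the following sketch (module and function names are assumptions based on dagster-dbt conventions, not the exact scaffold output):

from pathlib import Path

from dagster import AssetExecutionContext, Definitions
from dagster_dbt import DbtCliResource, dbt_assets

# Manifest produced when dbt parses the project (assumed location).
dbt_manifest_path = Path("target/manifest.json")

@dbt_assets(manifest=dbt_manifest_path)
def dbt_data_practitioner_dbt_assets(context: AssetExecutionContext, dbt: DbtCliResource):
    # Each dbt model becomes an asset; materializing runs `dbt build`
    # and streams the events back to Dagster.
    yield from dbt.cli(["build"], context=context).stream()

defs = Definitions(
    assets=[dbt_data_practitioner_dbt_assets],
    resources={"dbt": DbtCliResource(project_dir=".")},
)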


cd dagster_data_practitioner
DAGSTER_DBT_PARSE_PROJECT_ON_LOAD=1 dagster dev


To access from your browser, navigate to: http://127.0.0.1:3000

Dagster UI


Click on the black "Materialize all" button.


Sample Project

DataPractitioner is the sample project I've used to illustrate the usage of the aforementioned tools.

Noticed an issue with this Sample Project? Open an issue or a PR on GitHub!

Replicate MySQL database in ClickHouse using [experimental] MaterializedMySQL Database Engine

This post briefly documents the process of replicating a MySQL database into ClickHouse.

MySQL is an open-source relational database management system.

ClickHouse is a fast open-source column-oriented database management system that allows generating analytical data reports in real-time using SQL queries.

Minimum Software Requirements

MySQL server configuration (mysqld.cnf)

The following MySQL server configuration settings are mandatory.

Stop MySQL service.

systemctl status mysql
systemctl stop mysql.service

Update the configuration.

sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
server-id                     = 223344
log_bin                       = mysql-bin
expire_logs_days              = 1
binlog_format                 = row
binlog_row_image              = FULL
default_authentication_plugin = mysql_native_password
gtid_mode                     = on
enforce_gtid_consistency      = on

ClickHouse reads the binlog and replays the DDL and DML queries.

Start MySQL service.

systemctl start mysql.service

Reference: https://clickhouse.com/docs/en/engines/database-engines/materialized-mysql

MySQL user creation for replication

create user clickhouse_replication@'localhost' identified with mysql_native_password by 'ChRep$316';

grant replication slave, replication client, reload, select on *.* to clickhouse_replication@'localhost';

flush privileges;
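
A quick sanity check that the grants took effect:

SHOW GRANTS FOR 'clickhouse_replication'@'localhost';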

Sample MySQL Data Setup

Create a table.

CREATE TABLE `user` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `created_date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `last_modified_date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `username` varchar(255) DEFAULT NULL,
  `email` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `UK_sb8bbouer5wak8vyiiy4pf2bx` (`username`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

Insert sample data.

INSERT INTO `user`(`username`,`email`)VALUES('John Doe','johndoe@example.com');

INSERT INTO `user`(`username`,`email`)VALUES('Jane Doe','janedoe@example.com');

ClickHouse server configuration

sudo systemctl status clickhouse-server
sudo systemctl stop clickhouse-server
sudo nano /etc/clickhouse-server/users.xml

Add/Update the following value.

    <profiles>
        <default>
            <allow_experimental_database_materialized_mysql>1</allow_experimental_database_materialized_mysql>
        </default>
    </profiles>

sudo systemctl start clickhouse-server

Login to ClickHouse server using terminal or any GUI tool and verify that the changes have been saved.

clickhouse-client --password   
SELECT
    name,
    value,
    changed,
    description
FROM system.settings
WHERE name = 'allow_experimental_database_materialized_mysql'


Activating the replication in ClickHouse

CREATE DATABASE mysqlCH
ENGINE = MaterializedMySQL('127.0.0.1', 'ch', 'clickhouse_replication', 'ChRep$316')
SETTINGS allows_query_when_mysql_lost = 1, max_wait_time_when_mysql_unavailable = 10000, materialized_mysql_tables_list = 'user'


Now verify that the database and the table data have been replicated.

select * from mysqlCH.user;


INSERT additional rows into the MySQL table and check the replication status.

INSERT INTO `user`(`username`,`email`)VALUES('Alice','alice@example.com');
INSERT INTO `user`(`username`,`email`)VALUES('Bob','bob@example.com');


UPDATE Alice's email in the MySQL user table.

UPDATE `ch`.`user` SET `email` = 'alice@domain.com' WHERE `username` = 'Alice';


DELETE Bob from the MySQL user table.

DELETE FROM `ch`.`user` WHERE (`id` = '4');


ClickHouse

This post briefly documents the process of using ClickHouse.

Introduction

ClickHouse is a fast open-source column-oriented database management system that allows generating analytical data reports in real-time using SQL queries.

Minimum Software Requirements

Download the software

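One way to fetch the single ClickHouse binary, per the ClickHouse quick-start docs (the exact method used here is an assumption):

curl https://clickhouse.com/ | sh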

Running the application

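The downloaded binary can then start a server:

./clickhouse server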

Accessing the client

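With the server running, connect using the bundled client:

./clickhouse client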

Resources

New York Public Library "What's on the Menu?" dataset, an example of denormalizing data.

Airbyte

This post briefly documents the process of using Airbyte.

Introduction

Airbyte is an open-source data integration engine that helps you consolidate your data in your data warehouses, lakes and databases.

Minimum Software Requirements

Running the application

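Airbyte runs on Docker; a minimal sketch of bringing it up, based on the Airbyte quick-start docs of the time (the script name is an assumption):

git clone https://github.com/airbytehq/airbyte.git
cd airbyte
./run-ab-platform.sh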

Accessing the UI

http://localhost:8000/

username: airbyte
password: password

Note: credentials can be changed in the .env file.


Source Configuration

Data
  • Batch Processing
  • Real-Time Processing

  • OLTP
  • OLAP
  • Data Warehouse

  • Data Mesh
  • ClickHouse
  • Apache Kafka

Databases

  • MySQL

Data Visualization


Apache Superset: modern data exploration and data visualization platform.

Advantages
  • No-code interface for building charts quickly.
  • Large Gallery of Visualizations.
  • Replace or augment proprietary business intelligence tools.

ETL

  • Airbyte

Work

  • Data Engineer: Develop & optimize data systems and make data available for analysis.
    • Design, Maintain & Optimize Data Systems.
    • Data Pipelines: Transform & move data between data systems. Seamless data flow.
    • Data high availability, consistency, security, recoverability.
    • Acquire datasets that align with business needs.

  • Data Analyst: Analyze data in data systems to report & derive insights.
    • Reports

  • Data Scientist: Perform deeper analysis on data. Develop predictive models to solve more complex data problems.
    • Statistical Analysis

  • Data Practitioner: Supports the data life cycle by collecting, transforming, and analyzing data, and communicating results in order to inform and guide decision-making.

Resources

Metabase

Introduction


This example provides a base setup for using Metabase.

Minimum Software Requirements

Getting Started

Setup

Running the application via Command Line

java -jar metabase.jar

Accessing Metabase

http://localhost:3000/

Exploratory data analysis with sweetviz

"Exploratory Data Analysis (EDA) is an analysis approach that identifies general patterns in the data. These patterns include outliers and features of the data that might be unexpected. EDA is an important first step in any data analysis." 1

"Sweetviz is an open-source Python library that generates beautiful, high-density visualizations to kickstart EDA (Exploratory Data Analysis) with just two lines of code. Output is a fully self-contained HTML application."


Installation

pip install sweetviz


Basic Program

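A minimal sketch of the advertised two-line usage, assuming the Titanic CSV from the Resources section has been downloaded alongside the script:

import pandas as pd
import sweetviz as sv

# Load the dataset linked in the Resources section below.
df = pd.read_csv("titanic.csv")

# Analyze the DataFrame and open the self-contained HTML report.
report = sv.analyze(df)
report.show_html("SWEETVIZ_REPORT.html")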

Once the program is executed, the browser opens the self-contained HTML report.


Resources

https://github.com/datasciencedojo/datasets/blob/master/titanic.csv

https://www.knowledgehut.com/blog/programming/run-python-scripts#how-to-run-python-script-by-the-interpreter

Ubuntu Post Install Script

Introduction


Post-installation Shell Script for Ubuntu.

A shell script is a computer program designed to be run by a Unix shell, a command-line interpreter.

Minimum Software Requirements

Sample Project

Ubuntu Post Install Scripts

Basic Usage

  • Open a terminal window on Ubuntu by pressing either of the following key combinations on the keyboard.

    • Ctrl + Alt + T
    • Ctrl + Shift + T
  • Update the permissions of the script with the chmod command.

    • sudo chmod +x ubuntu-post-install.sh
  • Run the script with the bash command.

    • sudo bash ubuntu-post-install.sh

The packages installed are a personal preference; update the list with the ones you prefer.

https://github.com/AnanthaRajuC/ubuntu-post-install/blob/main/ubuntu-post-install.sh


GNOME Shell Extensions

Introduction


GNOME Shell extensions allow for the customization of the default GNOME Shell interface and its parts, such as window management and application launching.

Extensions

ArcMenu
Application menu for GNOME Shell.


Control Blur Effect On Lock Screen
Control the Blur Effect On Lock Screen.


Dash to Panel
An icon taskbar for the Gnome Shell. This extension moves the dash into the gnome main panel so that the application launchers and system tray are combined into a single panel.


Google Earth Wallpaper
Sets your wallpaper to a random photo from the curated Google Earth collection.


OpenWeather
Display weather information for any location on Earth in the GNOME Shell.


Lock Keys
Numlock & Capslock status on the panel.


Desktop Icons
Add icons to the desktop.

Apache Superset

Introduction


This example provides a base setup for using Apache Superset (a modern data exploration and visualization platform).

"Data and information visualization is an interdisciplinary field that deals with the graphic representation of data and information. It is a particularly efficient way of communicating when the data or information is numerous as for example a time series."

Minimum Software Requirements

Getting Started

Setup

docker version
docker-compose version

Database Setup

In this post we will use Docker to deploy Apache Superset. To allow Apache Superset to connect to a locally installed MySQL database, we have to perform the following operations.

Configure MySQL to listen on an external IP address so that the server can be reached from the Superset containers.

sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

Update the bind-address directive to a wildcard IP address (*, ::, or 0.0.0.0) so that the server accepts external connections.

By default, bind-address is set to 127.0.0.1, meaning that the server will only look for local connections.
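For example, assuming the stock configuration file layout:

bind-address = 0.0.0.0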

Reference: https://www.digitalocean.com/community/tutorials/how-to-allow-remote-access-to-mysql


MySQL User for Apache Superset

CREATE USER 'myuser'@'localhost' IDENTIFIED BY 'mypass';
CREATE USER 'myuser'@'%' IDENTIFIED BY 'mypass';
GRANT ALL ON *.* TO 'myuser'@'localhost';
GRANT ALL ON *.* TO 'myuser'@'%';
FLUSH PRIVILEGES;

Reference: https://stackoverflow.com/a/55742963

Bringing up Apache Superset

Updating docker-compose-non-dev.yml to connect to localhost.

git clone https://github.com/apache/superset.git
cd superset/
docker-compose -f docker-compose-non-dev.yml pull
docker-compose -f docker-compose-non-dev.yml up

Reference: https://superset.apache.org/docs/installation/installing-superset-using-docker-compose


Accessing Superset GUI

localhost:8088/login/


Connect to Database

Host: 172.17.0.1
Port: 3306
Database Name: mysqmpledb
Username: root
Password: root
Display Name: MySQL - LocalHost

Host 172.17.0.1 is the default Docker bridge gateway, which lets the Superset containers reach the MySQL server running on the host.

IntelliJ IDEA Plugins

Introduction


IntelliJ IDEA: an integrated development environment written in Java for developing computer software written in Java, Kotlin, Groovy, and other JVM-based languages. It is developed by JetBrains and is available as an Apache 2 Licensed community edition, and in a proprietary commercial edition.

Plugins are software extensions that can be loaded on a program to improve its functionality.

Minimum Software Requirements

Plugins

Rainbow CSV
Plugin for highlighting CSV files in different colors.

SonarLint
A free IDE extension to find and fix bugs, vulnerabilities and code smells as you write code! Like a spell checker, SonarLint highlights issues on the fly and provides quick fixes or clear remediation guidance to help you clean the code before it is even committed.

RoboPOJOGenerator
Intellij Idea and Android Studio plugin for JSON to POJO transformation.

intellij-generateAllSetMethod
Intellij plugin to generate call to setter method value for class.

Key Promoter X
Learn essential shortcuts while you are working. When you use the mouse on a button inside the IDE, the Key Promoter X shows you the keyboard shortcut that you should have used instead. This provides an easy way to learn how to replace tedious mouse work with keyboard keys and helps to transition to a faster, mouse free development.

Maven Helper
Analyze and exclude conflicting dependencies and more.

ksqlDB and ksql-cli

Introduction


This example provides a base setup for using ksqlDB (The database purpose-built for stream processing applications) and ksql-cli.

Minimum Software Requirements

Getting Started

Setup

Running the application via docker compose

 ksqldb-server:
    image: confluentinc/ksqldb-server:latest
    container_name: ksqldb-server
    hostname: ksqldb-server
    depends_on: [kafka]
    ports:
      - 8088:8088
    networks:
      - webproxy
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: kafka:9092
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
      KSQL_KSQL_CONNECT_URL: http://kafka-connect-01:8084
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081

  ksqldb-cli:
    image: confluentinc/ksqldb-cli:latest
    container_name: ksqldb-cli
    networks:
      - webproxy
    depends_on: [kafka, ksqldb-server]
    entrypoint: /bin/sh
    tty: true
    environment:
      KSQL_KSQL_CONNECT_URL: http://kafka-connect-01:8084

Pull all required docker images

$ docker-compose pull

Start up the environment

The first time that you do this, the Docker images will be pulled down from the remote server. This may take a while!

$ docker-compose up
Creating ksqldb-server      ... done
Creating ksqldb-cli         ... done

Accessing ksqlDb via ksqldb-cli

$ docker exec -it ksqldb-cli ksql http://ksqldb-server:8088


Check topics, streams and tables

show topics;
show streams;
show tables;

Streams

  • Declare Streams
SET 'auto.offset.reset' = 'earliest';
CREATE STREAM PERSON_STREAM (id bigint,uuid VARCHAR,created_date_time TIMESTAMP,last_modified_date_time TIMESTAMP,name VARCHAR,username VARCHAR,address_id bigint) WITH (KAFKA_TOPIC='mysql.streaming_etl_db.person',VALUE_FORMAT='JSON');
CREATE STREAM ADDRESS_STREAM (id bigint,uuid VARCHAR,created_date_time TIMESTAMP,last_modified_date_time TIMESTAMP,city VARCHAR,street VARCHAR,suite VARCHAR,zipcode VARCHAR,geo_id bigint) WITH (KAFKA_TOPIC='mysql.streaming_etl_db.address',VALUE_FORMAT='JSON');
  • Queries
DESCRIBE PERSON_STREAM;
select * from PERSON_STREAM;
SELECT * FROM PERSON_STREAM EMIT CHANGES LIMIT 1;
+-------------------------+-------------------------+-------------------------+-------------------------+-------------------------+-------------------------+-------------------------+
|ID                       |UUID                     |CREATED_DATE_TIME        |LAST_MODIFIED_DATE_TIME  |NAME                     |USERNAME                 |ADDRESS_ID               |
+-------------------------+-------------------------+-------------------------+-------------------------+-------------------------+-------------------------+-------------------------+
|1                        |ce8d2120-1f93-11ed-8647-0|2022-08-19T13:22:00.000  |2022-08-19T13:22:00.000  |d14                      |dbz14                    |1                        |
|                         |c9a3cfadc50              |                         |                         |                         |                         |                         |
Limit Reached
Query terminated
  • stream-stream join
CREATE STREAM PERSON_ADDRESS_ENRICHED_STREAM WITH (FORMAT='JSON', KAFKA_TOPIC='person_address_enriched', PARTITIONS=1, REPLICAS=1) AS 
SELECT
  P.ID P_ID,
  A.ID A_ID,
  P.NAME NAME,
  A.CITY CITY
FROM PERSON_STREAM P
LEFT OUTER JOIN ADDRESS_STREAM A WITHIN 1 HOURS GRACE PERIOD 30 MINUTES ON ((A.ID = P.ADDRESS_ID))
EMIT CHANGES;

Tables

  • Declare Tables
CREATE TABLE PERSON (id bigint PRIMARY KEY,uuid VARCHAR,created_date_time TIMESTAMP,last_modified_date_time TIMESTAMP,name VARCHAR,username VARCHAR,address_id bigint) WITH (KAFKA_TOPIC='mysql.streaming_etl_db.person',VALUE_FORMAT='JSON');
CREATE TABLE ADDRESS (id bigint PRIMARY KEY,uuid VARCHAR,created_date_time TIMESTAMP,last_modified_date_time TIMESTAMP,city VARCHAR,street VARCHAR,suite VARCHAR,zipcode VARCHAR,geo_id bigint) WITH (KAFKA_TOPIC='mysql.streaming_etl_db.address',VALUE_FORMAT='JSON');
SELECT * FROM PERSON EMIT CHANGES LIMIT 1;
SELECT * FROM ADDRESS EMIT CHANGES LIMIT 1;
  • Table Joins
SELECT 
	P.NAME,
	A.CITY
FROM PERSON P
LEFT JOIN ADDRESS A on A.id = P.address_id
EMIT CHANGES 
LIMIT 1;
SELECT 
  P.NAME, 
  A.CITY
FROM PERSON P
INNER JOIN ADDRESS A
ON A.id = P.address_id
EMIT CHANGES
LIMIT 1;
CREATE TABLE PERSON_ADDRESS_ENRICHED (P_ID bigint,A_ID bigint,NAME VARCHAR,CITY VARCHAR) WITH (KAFKA_TOPIC='person_address_enriched',VALUE_FORMAT='JSON');
  • Others
DROP TABLE IF EXISTS PERSON;

Tear down the stack

$ docker-compose down
Stopping ksqldb-cli       ... done
Stopping ksqldb-server    ... done
Removing ksqldb-cli         ... done
Removing ksqldb-server      ... done

If you want to preserve the state of all containers, run docker-compose stop instead.

Learning & Reference Resources

Various


Semantic Versioning 2.0.0
Versioning Guidelines.

Java Design Patterns
Collection of several design patterns implemented in Java.

Common Words
Visualization of common words in different programming languages.

Web

REST API Tutorial
REST API Tutorial.

HTTP Status Codes
HTTP Status Codes.

Online Coding Environments

IDEs (Online)


Try It Online
Online compiler for several practical and recreational programming languages.

Ideone
An online compiler and debugging tool which allows you to compile source code and execute it online in more than 60 programming languages.

Codenvy
Self-service Eclipse Che workspaces.

Coding Ground - TutorialsPoint
Code in almost all popular languages using Coding Ground. Edit, compile, execute and share your projects, 100% cloud.

Fiddles

PhpFiddle
PhpFiddle provides Web IDE, Web Tools, APIs, Widgets, PHP Libraries, programming resources and execution environment for PHP/MySQL/SQLite, HTML/CSS and JavaScript coding online. PhpFiddle allows people to test PHP code snippets, create, execute and download web applications, for example, Zend Framework 2 projects.

.NET Fiddle
A .NET sandbox for developers to quickly try out code and share code snippets.

JSFiddle
Test your JavaScript, CSS, HTML or CoffeeScript online with JSFiddle code editor.

SQL Fiddle
A tool for easy online testing and sharing of database problems and their solutions.

Python Fiddle
The Python IDE for the web. Play around with and modify live example code. Share or demonstrate solutions to problems.

R-Fiddle
Share code snippets or make your own variations on other people's code.

Go Playground
A nice and clean environment for trying out Go, the statically typed language originally developed at Google.

API Tools

APIs and Web Services


down for everyone or just me
Check if a website is down just for you or for everyone.

Uptime
SLA & Uptime Calculator.

Apigee API Console
Discover, Learn, Test, and Debug any API with interactive developer tools and documentation.

PostBin
Programmatically test your API clients or webhooks.

httpbin
HTTP Request & Response Service (Client Testing).

RequestBin
Gives you a URL that will collect requests made to it and let you inspect them in a human-friendly way. Use RequestBin to see what your HTTP client is sending or to inspect and debug webhook requests.

Mockbin
Mockbin allows you to generate custom endpoints to test, mock, and track HTTP requests & responses between libraries, sockets and APIs.

Statuspage
Status and incident communication tool.

Rapid API
RapidAPI is the world’s largest API marketplace.

APIembed
Embeddable API Code snippets - Auto-generated code snippets in many programming languages for your website, blog or API documentation.

Loader
Free load testing service that allows you to stress test your web apps & APIs with thousands of concurrent connections.

REQ RES
A hosted REST-API ready to respond to your AJAX requests.
