Thursday, 13 June 2019

Generating Unique IDs in a Distributed Environment at High Scale:

Recently I was working on a project which required a unique id in a distributed environment, which we used as a primary key when storing records in databases. On a single server it is easy to generate a unique id: Oracle uses a sequence (an incrementing counter for the next id), and in SQL we can use an auto-increment primary key column in a table.
In SQL we can do it while creating the table.
CREATE TABLE example (
    primary_key INTEGER PRIMARY KEY AUTOINCREMENT,
    ...
);
In Oracle, we use a sequence while inserting into the table.
CREATE SEQUENCE seq_example
MINVALUE 1
START WITH 1
INCREMENT BY 1
CACHE 10;
INSERT INTO example (primary_key)
VALUES (seq_example.nextval);
On a single server it's pretty easy to generate a primary key, but in a distributed environment it becomes a problem, because the key should be unique across all the nodes. Let's see how we can do it in a distributed environment.
There are a couple of approaches, each with its own pros and cons, so let's go through them one by one.

Database Ticket Servers:

These are centralized auto-increment servers which respond with unique ids when requested by the nodes. The problem with this kind of setup is that it is a single point of failure: all the nodes depend on this server, and if it fails, none of the nodes can process further.
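As a rough illustration of the idea (a toy sketch, not any particular production system), here is a ticket server in Python backed by SQLite; the database file, table, and column names are made up for the example:
import sqlite3

# A single table whose auto-increment counter hands out the ids.
conn = sqlite3.connect("tickets.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS tickets "
    "(id INTEGER PRIMARY KEY AUTOINCREMENT, stub TEXT)"
)

def next_id():
    # Every insert advances the counter; the new row id is the unique id.
    cur = conn.execute("INSERT INTO tickets (stub) VALUES ('a')")
    conn.commit()
    return cur.lastrowid

print(next_id())  # 1
print(next_id())  # 2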

UUID:

UUIDs are 128-bit numbers, usually written in hexadecimal, that are globally unique. The chance of the same UUID being generated twice is negligible. Depending on the version, a UUID contains a reference to the network address of the host that generated it, a timestamp (a record of the precise time of a transaction), and a randomly generated component.
According to Wikipedia, regarding the probability of duplicates in random UUIDs:
Only after generating 1 billion UUIDs every second for the next 100 years, the probability of creating just one duplicate would be about 50%. Or, to put it another way, the probability of one duplicate would be about 50% if every person on earth owned 600 million UUIDs.
  • UUIDs do not require coordination between different nodes and can be generated independently.
But the problem with UUIDs is that they are very big in size and do not index well; the larger index size affects query performance.
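For example, Python's standard uuid module can generate both kinds with no coordination at all:
import uuid

# Version 1: derived from the host's MAC address and a timestamp
print(uuid.uuid1())  # e.g. 2a7510ce-8dc4-11e9-b475-0800200c9a66

# Version 4: essentially 122 random bits
print(uuid.uuid4())  # e.g. 16fd2706-8baf-433b-82eb-8c7fada847da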

Twitter Snowflake:

Twitter Snowflake is a dedicated network service for generating 64-bit unique IDs at high scale with some simple guarantees.
The IDs are made up of the following components:
  • Epoch timestamp in millisecond precision — 41 bits (gives us 69 years with a custom epoch)
  • Machine id — 10 bits (gives us up to 1024 machines)
  • Sequence number — 12 bits (a local counter per machine that rolls over every 4096, i.e. at most 4096 ids per machine per millisecond)
  • The extra 1 bit is reserved for future purposes.
So the id generated this way is 64 bits, which solves the size and latency problems, but it introduces the new problem of maintaining extra servers.
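To make the bit layout concrete, here is a rough single-process sketch in Python. The custom epoch and machine id below are arbitrary values for illustration; the real Snowflake runs as a dedicated network service rather than an in-process class.
import time
import threading

CUSTOM_EPOCH = 1546300800000  # 2019-01-01 UTC in ms; an arbitrary custom epoch

class Snowflake:
    def __init__(self, machine_id):
        assert 0 <= machine_id < 1024       # must fit in 10 bits
        self.machine_id = machine_id
        self.sequence = 0
        self.last_ms = -1
        self.lock = threading.Lock()

    def _now_ms(self):
        return int(time.time() * 1000) - CUSTOM_EPOCH

    def next_id(self):
        with self.lock:
            now = self._now_ms()
            if now == self.last_ms:
                self.sequence = (self.sequence + 1) & 0xFFF  # 12-bit counter
                if self.sequence == 0:
                    # counter exhausted for this millisecond; wait for the next one
                    while now <= self.last_ms:
                        now = self._now_ms()
            else:
                self.sequence = 0
            self.last_ms = now
            # 41 bits of timestamp | 10 bits of machine id | 12 bits of sequence
            return (now << 22) | (self.machine_id << 12) | self.sequence

gen = Snowflake(machine_id=1)
print(gen.next_id())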
That’s it
Happy Learning.

Saturday, 8 June 2019

Redis Keyspace Notifications for Expired Keys:

I was trying to figure out how to listen for, or subscribe to, expired keys in Redis, and I came across notification events in Redis. In this article I will give you an overview of how to configure keyspace notification events.

Enable keyspace notifications:

By default, keyspace notifications are disabled in Redis because of their performance impact. There are two ways to enable them: using redis-cli or the redis.conf file.

Enable Using Redis-CLI:

$ redis-cli config set notify-keyspace-events Ex
OK
Here we only configured expired events on keys, which is why we used Ex:
E: keyevent events, published with the __keyevent@<db>__ prefix
x: expired events
For more event types, check the list below, copied from the Redis documentation.
K     Keyspace events, published with __keyspace@<db>__ prefix.
E     Keyevent events, published with __keyevent@<db>__ prefix.
g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
$     String commands
l     List commands
s     Set commands
h     Hash commands
z     Sorted set commands
x     Expired events (events generated every time a key expires)
e     Evicted events (events generated when a key is evicted for maxmemory)
A     Alias for g$lshzxe, so that the "AKE" string means all the events.

Enable Using Redis.conf:

Add the line notify-keyspace-events Ex in redis.conf.
Now that we have configured this, let's see an example.
  • Let's set a value with an expiry time of 5 seconds for the key redis first, and then a subscriber for expired events.
127.0.0.1:6379> SETEX redis 5 test
OK
  • Now adding subscriber for expired events.
$ redis-cli --csv psubscribe '__key*__:*'
Reading messages... (press Ctrl-C to quit)
"psubscribe","__key*__:*",1
After 5 seconds, the subscriber receives the expired event:
"pmessage","__key*__:*","__keyevent@0__:expired","redis"
Yay !!! it’s working.
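The same subscription can be made from application code. Here is a minimal sketch using the redis-py client, assuming Redis is running locally on the default port:
import redis

r = redis.Redis(host="localhost", port=6379)
r.config_set("notify-keyspace-events", "Ex")  # same setting as the CLI step above

p = r.pubsub()
p.psubscribe("__keyevent@0__:expired")

r.setex("redis", 5, "test")  # key "redis" expires after 5 seconds
for message in p.listen():
    if message["type"] == "pmessage":
        print("expired key:", message["data"])  # b'redis'
        break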
That’s it.

Thursday, 6 June 2019

Transactions in Redis Cluster (Multi Nodes)


  • Redis also supports transactions, though not the same as in SQL databases. In Redis, a transaction consists of a block of commands placed between MULTI and EXEC.
  • Commands inside the block are not executed immediately; instead they are queued, and when EXEC is executed all the changes are applied to Redis.
Let's see an example. We start a block with MULTI and then set the value of the key tutorials to redis; the command is only queued, and checking from another client before EXEC shows the key is not present yet. Then we execute EXEC:
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET tutorials redis
QUEUED
127.0.0.1:6379> EXEC
1) OK
127.0.0.1:6379> GET tutorials
"redis"
Yay!! it's working. As we have seen, the command (SET tutorials redis) between MULTI and EXEC was executed in a transaction.
  • When EXEC is encountered, all the queued commands are applied as a single unit.

Key Points of transaction in Redis:

  • All the commands in a transaction are serialized and executed sequentially
  • It also guarantees that the commands are executed as a single isolated operation.
  • Redis transactions are also atomic.
  • Redis does not support rollbacks: if some of the commands in the transaction fail, the other commands that succeed are still applied.
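From application code, the MULTI/EXEC block is usually issued through a client library. Here is a minimal sketch with redis-py, whose transactional pipeline wraps the queued commands in MULTI ... EXEC (key and value taken from the example above):
import redis

r = redis.Redis()

pipe = r.pipeline(transaction=True)  # commands are queued, not sent one by one
pipe.set("tutorials", "redis")
pipe.get("tutorials")
results = pipe.execute()             # sends MULTI, the queued commands, then EXEC
print(results)                       # [True, b'redis']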

Discarding the command queue:

If you want to abort the transaction and don't want to commit it, you can use DISCARD. This will not execute any of the queued commands, and the connection state goes back to normal.
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET tutorial redis
QUEUED
127.0.0.1:6379> GET tutorial
QUEUED
127.0.0.1:6379> DISCARD
127.0.0.1:6379> Keys *
(empty list or set)

Watch command for Optimistic Locking:

We can use the WATCH command to detect changes made by other clients. If any change is made to a key for which WATCH was issued, the transaction is aborted and the EXEC command returns a null response.
  • It provides CAS (compare and swap ) kind of behavior.
Let's see an example.
Client1 executes these commands:
127.0.0.1:6379> WATCH tutorials
OK
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET tutorials redis-test
QUEUED
127.0.0.1:6379> EXEC
(nil)
At the same time, Client2 updated the value before Client1 committed, so Client1's transaction was aborted and EXEC returned nil:
127.0.0.1:6379> SET tutorials redis-test-client2
OK
Now let's see the value of the key tutorials.
127.0.0.1:6379> GET tutorials
"redis-test-client2"
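In redis-py, the same optimistic-locking pattern looks roughly like this (a sketch reusing the key from the example above):
import redis

r = redis.Redis()

with r.pipeline() as pipe:
    while True:
        try:
            pipe.watch("tutorials")            # WATCH before reading
            current = pipe.get("tutorials")
            pipe.multi()                       # start queuing the transaction
            pipe.set("tutorials", "redis-test")
            pipe.execute()                     # raises WatchError if the key changed
            break
        except redis.WatchError:
            continue                           # another client won; retry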
We have seen how transactions work in Redis. Now let's discuss transactions in a Redis Cluster.

Transactions in Redis Cluster:

If the transaction runs on a standalone server, or all the keys involved live on the same node, fully-featured transactions are possible. The client has to keep track of the node on which the transaction is being executed while it is in progress: it sets a transactional flag, and on the very first command containing a key it issues MULTI, right before the first command within the transaction.
In a distributed environment, the keys that need to be updated may live on different nodes. In that case, when a key is requested from a node that is not yet part of the transaction, a MULTI command is issued to join that node to the transaction. Each node then only knows about its own slice of the transaction; one node is not aware that keys were updated on another node, so there is no single atomic commit across nodes. This is why transactions cause problems in a multi-sharded environment.
Thanks

Monday, 27 May 2019

Cross-Origin Resource Sharing (CORS) and Preflight Request

If you are a frontend developer or an API developer you have come across this term many times, so let's discuss in detail what this policy is all about. Let's first see what CORS is.

What is CORS?

CORS is a mechanism, standardized by the W3C, for communication between different domains. It tells the browser whether a resource may be accessed cross-origin (from a different source), and it is a way for the server to check whether requests coming from a different origin are allowed.
  • For example, the frontend JavaScript code for a web application served from http://abc.com uses XMLHttpRequest to make a request to http://abcd.com/.
  • For security reasons, browsers restrict cross-origin HTTP requests initiated from within scripts.
  • The CORS mechanism supports secure cross-origin requests and data transfers between browsers and web servers.


  • Add the HTTP header (Access-Control-Allow-Origin) on the server side to accept requests from a specified domain, from all domains, or from a list of domains.
Access-Control-Allow-Origin: *

CORS Request Types:

As a developer, you rarely construct these requests yourself, but you will find them in the browser's network log, and they have a performance impact as well. There are two types of requests: simple and preflight.

Simple Request:

These requests involve a simple exchange of CORS headers between client and server to check permissions. For a request to come under this category, it has to follow the criteria below.

Allowed methods:

GET
HEAD
POST

Allowed Headers:

Accept
Accept-Language
Content-Language
Content-Type (but note the additional requirements below)
Last-Event-ID
DPR
Save-Data
Viewport-Width
Width

Allowed Content-Type

application/x-www-form-urlencoded
multipart/form-data
text/plain
  • No event listeners are registered on any XMLHttpRequestUpload object used in the request; these are accessed using the XMLHttpRequest.upload property.
  • No ReadableStream object is used in the request.

Preflight Request:

If the request does not follow the above criteria, then it comes under preflight. The browser automatically sends an HTTP request with the OPTIONS method before the original one, to check whether it is safe to send the original request. If the server indicates that the original request is safe, it will allow the original request; otherwise it will block it.
  • It is an OPTIONS request, using three HTTP request headers:
Access-Control-Request-Method
Access-Control-Request-Headers
Origin header


Let's see an example.
The client asks the server whether it would allow a PUT request, before sending the PUT request, by using a preflight request:
OPTIONS /api/ 
Access-Control-Request-Method: PUT 
Access-Control-Request-Headers: origin, x-requested-with
Origin: https://api.com
If the server allows it, it will respond to the preflight request with an Access-Control-Allow-Methods response header:
HTTP/1.1 204 No Content
Connection: keep-alive
Access-Control-Allow-Origin: https://api.com
Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE
Access-Control-Max-Age: 86400
  • Access-Control-Allow-Origin: The origin that is allowed to make the request, or * if a request can be made from any origin
  • Access-Control-Allow-Methods: A comma-separated list of HTTP methods that are allowed
  • Access-Control-Allow-Headers: A comma-separated list of the custom headers that are allowed to be sent
  • Access-Control-Max-Age: The maximum duration that the response to the preflight request can be cached before another call is made
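To see these headers in action on the server side, here is a minimal sketch using Python's standard http.server; the allowed origin and port are placeholder values from the example above:
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://api.com"  # placeholder origin

class CORSHandler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # Answer the preflight request
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Methods", "POST, GET, OPTIONS, DELETE")
        self.send_header("Access-Control-Allow-Headers", "x-requested-with")
        self.send_header("Access-Control-Max-Age", "86400")
        self.end_headers()

    def do_GET(self):
        # The actual response also needs the allow-origin header
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello")

HTTPServer(("", 8000), CORSHandler).serve_forever()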
That’s it
Thanks

Thursday, 16 May 2019

/etc/skel directory in Linux

  • skel is derived from "skeleton", because it contains the basic structure of a home directory.
  • The /etc/skel directory contains files and directories that are automatically copied over to a new user's home directory when the user is created with the useradd command.
  • This ensures that all users get the same initial settings and environment.
ls -la /etc/skel/
total 24
drwxr-xr-x.  2 root root   62 Apr 11  2018 .
drwxr-xr-x. 77 root root 2880 Mar 28 03:38 ..
-rw-r--r--.  1 root root   18 May 30 17:07 .bash_logout
-rw-r--r--.  1 root root  193 May 30 17:07 .bash_profile
-rw-r--r--.  1 root root  231 May 30 17:07 .bashrc
  • The location of /etc/skel can be changed by editing the line that begins with SKEL= in the configuration file /etc/default/useradd. By default this line says SKEL=/etc/skel.
cat /etc/default/useradd
# useradd defaults file
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes
  • The default permission of the /etc/skel directory is drwxr-xr-x.
  • It is not recommended to change the permissions of the skel directory or its contents. Some profile files inside it need read permission, and changing the permissions (for example to execute) can cause some programs or profiles to stop working or not work as expected.

Change Timezone in Linux

A standard and precise timezone is crucial for the evaluation and execution of many tasks and processes running on a Linux instance. We come across certain circumstances where we need to change and set up a different timezone on the Linux system.
Let’s see how can we do it.

Check Current Timezone:

We can do this using the date command:
date
Thu May 16 10:35:11 IST 2019


or using the timedatectl command:
timedatectl
 Local time: Thu 2019-05-16 23:05:55 IST
 Universal time: Thu 2019-05-16 17:35:55 UTC
 RTC time: Thu 2019-05-16 17:35:55
 Time zone: Asia/Kolkata (IST, +0530)
 System clock synchronized: yes
 systemd-timesyncd.service active: yes
 RTC in local TZ: no


How to change:

  • All the time zones are located under /usr/share/zoneinfo directory


  • Now create a symbolic link from the timezone file in that directory to /etc/localtime (note that /etc/localtime is a file, not a directory). For example, for Central European Time:
ln -sf /usr/share/zoneinfo/CET /etc/localtime
  • In some distributions, the timezone is controlled by the /etc/timezone file.
cat /etc/timezone 
Asia/Kolkata
  • To change this to Australian time (Brisbane), modify the /etc/timezone file as shown below.
# vim /etc/timezone
Australia/Brisbane
That’s it.
Happy Learning.
