Tuesday, February 14, 2017

Kamailio: limit CPS/CPM/concurrent calls per gateway.

A small task: make some generic limiters for calls per gateway, which in my case means per destination.
Limiting concurrent calls is based on the dialog module, limiting CPS/CPM on the htable module; the actual solution is taken from here.

In the modparam section I'm showing only the settings needed for this solution, so if you want to adapt it to your case, use your own settings along with the ones provided.

loadmodule "htable.so"
...
modparam("htable", "htable", "rhs=>size=32;initval=0;autoexpire=300;")
modparam("htable", "htable", "rhm=>size=32;initval=0;autoexpire=1800;")

loadmodule "dialog.so"
...
modparam("dialog", "profiles_with_value", "concurrent_calls")

...

request_route {
...
   if (is_method("INVITE")) {
      route(LIMIT_CALLS);
   }
...
}

route[LIMIT_CALLS] {
    # Limit concurrent calls per gateway
    if (!dlg_isflagset("1")) {
        if (get_profile_size("concurrent_calls", "$td", "$avp(calls)")) {
            $avp(concCallLimit) = <ACTUAL_CONCURRENT_CALLS_LIMIT_HERE>;
            if ($avp(calls) >= $avp(concCallLimit)) {
                xlog("L_INFO", "Concurrent calls to $td limit reached");
                send_reply("503", "Calls limit reached");
                exit;
            } else {
                dlg_manage();
                dlg_setflag("1");
                set_dlg_profile("concurrent_calls","$td");
            }
        }
    }

    $avp(rateHashSec) = "$td:sec:"+$timef(%Y/%m/%d_%H_%M_%S);
    $avp(rateHashMin) = "$td:min:"+$timef(%Y/%m/%d_%H_%M_00);

    $avp(ratePerSec) = $shtinc(rhs=>$avp(rateHashSec));
    $avp(ratePerMin) = $shtinc(rhm=>$avp(rateHashMin));
    $avp(limitPerSec) = <ACTUAL_LIMIT_PER_SECOND_HERE>;
    $avp(limitPerMin) = <ACTUAL_LIMIT_PER_MINUTE_HERE>;
    if ($avp(ratePerSec) > $avp(limitPerSec) || $avp(ratePerMin) > $avp(limitPerMin)) {
        xlog("L_INFO", "CPS/CPM Limit on $td");
        send_reply("503", "CPS/CPM Limit on $td");
        exit;
    }
}

The main idea for concurrent calls is to use dialog profiles and the ability to count them. Note that this works only in stateful mode, so dialogs are tracked and removed from the profile when they end. Dialog flag "1" is used so the same dialog is not counted several times.
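If you want to peek at the current counter from the command line, something like this should work (check the dialog module RPC exports for your Kamailio version; gw1.example.com here is just a made-up destination value):

kamcmd dlg.profile_get_size concurrent_calls gw1.example.com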

The idea behind the CPS/CPM limiter is an htable with keys that are reachable only during a specific time window, because of how the key names are formed.
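For illustration (gw1.example.com is a hypothetical value of $td), an INVITE arriving at 10:15:42 on 2017/02/14 would increment these two counters:

$sht(rhs=>gw1.example.com:sec:2017/02/14_10_15_42)
$sht(rhm=>gw1.example.com:min:2017/02/14_10_15_00)

Every call within the same second (or minute) hits the same key, and autoexpire cleans up the stale ones later.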

And $td is used to limit per current destination, which was set up earlier in the config (that part is not touched here).
This one is not the best for copy-paste, so adapt it to your needs :)

Saturday, February 11, 2017

Postgres Master-Master replication with BDR

This article describes the steps to get master-master replication for PostgreSQL on Debian Jessie.
Made mostly for glorious Copy-Paste with minor descriptions.
So, we will use the BDR (Bi-Directional Replication) extension from 2ndQuadrant.
Installation instructions can be found on the project page, but I'll also describe them here for Debian Jessie as of February 2017 (currently the stock repos have Postgres 9.6).
So, the installation steps come to this:

1. Back up existing data with pg_dump. For sure, use your own databases instead of database1 (on the first node)
pg_dump database1 -f backup_database1.sql

2. Get info about users and rights.
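A simple way to do this is to dump the cluster-wide roles and grants with pg_dumpall (backup_globals.sql is just an example file name):

pg_dumpall --globals-only > backup_globals.sql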

3. A rather debatable step here - remove the entire existing cluster (I had problems with the versions of bdr_init_copy and didn't have much time to play around, so for me it was easier to recreate all the databases)
pg_dropcluster 9.6 main --stop
apt-get remove postgresql-9.6

4. Install Postgres with BDR extension. (Both nodes)
echo 'deb http://packages.2ndquadrant.com/bdr/apt/ jessie-2ndquadrant main' >>  /etc/apt/sources.list
wget --quiet -O - http://packages.2ndquadrant.com/bdr/apt/AA7A6805.asc | sudo apt-key add -
apt-get update
apt-get install -y postgresql-bdr-9.4 postgresql-bdr-9.4-bdr-plugin php5-pgsql postgresql-bdr-contrib-9.4

5. Modify files in /etc/postgresql/9.4/main/ (Both nodes)
postgresql.conf (Add this to end)
listen_addresses = '*'
shared_preload_libraries = 'bdr'
wal_level = 'logical'
track_commit_timestamp = on
max_connections = 100
max_wal_senders = 10
max_replication_slots = 10

Note - don't forget to protect yourself with iptables - Postgres now listens on all interfaces!
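A minimal sketch of such rules, mirroring the pg_hba.conf entries below (x.x.x.x / z.z.z.z are the same placeholders, and this assumes your existing firewall setup allows appending rules like this):

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -s x.x.x.x -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -s z.z.z.z -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -j DROP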

pg_hba.conf
hostssl all all x.x.x.x/32 trust # Own IP address
hostssl all all z.z.z.z/32 trust # Second node IP address
hostssl replication postgres x.x.x.x/32 trust # Own IP address
hostssl replication postgres z.z.z.z/32 trust # Second node IP address

6. Restart Postgres (both nodes)
service postgresql restart

7. Create database and users for it (both nodes)
CREATE DATABASE database1;
CREATE ROLE database1_user WITH SUPERUSER LOGIN PASSWORD 'SuperPass';
GRANT ALL PRIVILEGES ON DATABASE database1 TO database1_user;

8. Create BDR extension on this database (Both nodes)
\c database1
create extension pgcrypto;
create extension btree_gist;
create extension bdr;
Check it with
SELECT bdr.bdr_variant();
SELECT bdr.bdr_version();
Should be like this
database1=# SELECT bdr.bdr_variant();
 bdr_variant
-------------
 BDR
(1 row)

database1=# SELECT bdr.bdr_version();
    bdr_version
-------------------
 1.0.2-2016-11-11-
(1 row)

9. Create first master node (Only first node!)
SELECT bdr.bdr_group_create(local_node_name := 'node1', node_external_dsn := 'host=<OWN EXTERNAL IP> port=5432 dbname=database1');
Check it with
SELECT bdr.bdr_node_join_wait_for_ready();
Should be like
database1=# SELECT bdr.bdr_node_join_wait_for_ready();
 bdr_node_join_wait_for_ready
------------------------------

(1 row)

10. Create second master node (Only second node!)
SELECT bdr.bdr_group_join(local_node_name := 'node2', node_external_dsn := 'host=<OWN EXTERNAL IP> port=5432 dbname=database1', join_using_dsn := 'host=<NODE1 EXTERNAL IP> port=5432 dbname=database1');
Check with
SELECT bdr.bdr_node_join_wait_for_ready();
Should be the same as in step 9.
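As an extra check (not strictly required), the bdr.bdr_nodes catalog on either node should list both nodes, with node_status 'r' meaning ready:

SELECT node_name, node_status FROM bdr.bdr_nodes;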

11. Restore database data (any node, but only one of)
psql database1 < backup_database1.sql
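And a quick way to make sure replication really goes both ways - just a sketch with a throwaway table (bdr_test is an arbitrary name):

-- on node1
CREATE TABLE bdr_test (id int PRIMARY KEY, note text);
INSERT INTO bdr_test VALUES (1, 'from node1');
-- on node2, a moment later
SELECT * FROM bdr_test;
INSERT INTO bdr_test VALUES (2, 'from node2');
-- back on node1
SELECT * FROM bdr_test;
DROP TABLE bdr_test;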

Yes, maybe not the best solution, but it works for me. Also, if you can't drop your data or for some other reason can't delete the database, you can use bdr_init_copy on the second node instead, but I'll leave that task to you :)

Tuesday, February 7, 2017

Vultr HA. Some additions to existing article.

So, I've now switched completely from DigitalOcean to Vultr. To be honest - mostly because of price, since it's a bit cheaper. And it accepts debit cards.
I also had a chance to set up HA on this cloud provider by following this article.
But here are some additions to it for the complete algorithm.
1. Order BGP via a support ticket, using the Vultr-provided AS.
2. Purchase a Reserved IP address, but do not assign it to any of the instances.
3. Order 2 instances in the same location.
4. Set up the network on both instances according to the article.
 4.1 Use the Reserved IP address as the dummy1 interface IP.
 4.2 Use a /32 subnet mask regardless of what you see in the Reserved IP settings (see the sketch after this list).
 4.3 Use the BIRD settings from the server instance's BGP page (BGP configuration link) so as not to mix up the AS numbers and passwords.
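For points 4.1/4.2, a minimal sketch of the dummy1 interface on Debian with ifupdown, where a.b.c.d stands for your Reserved IP:

# /etc/network/interfaces
auto dummy1
iface dummy1 inet static
    pre-up ip link add dummy1 type dummy || true
    address a.b.c.d
    netmask 255.255.255.255

The /32 keeps the Reserved IP local to the instance, and BIRD (configured from the BGP page) announces it over BGP, so Vultr routes the address to whichever instance is currently announcing it.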