Blocking IP addresses with ipset

In Chinese Hackers I mentioned that I block ssh attackers with the help of fail2ban. Unfortunately, fail2ban uses iptables to create a firewall rule in its chain “f2b-SSH” for each individual IP address. For a modern processor this is no problem, even with thousands of these rules, but for a low-powered ARM processor it can have a noticeable influence on network performance. In Using Odroid as IP Router I wrote:

Added 10-Jan-2019: I previously added ca. 3000 iptables rules for blocking IP address ranges which attacked me on port 22 (ssh). That many rules will deteriorate your network performance significantly. My download speed went down from 100 MBit/s to 20 MBit/s.

An alternative, or additional, approach to avoid this iptables slowdown is ipset, Arch package ipset. ipset maintains hash tables of IP addresses or networks. Back when I still used the Odroid, the main sshd server copied the blocked IP addresses to the Odroid, which in turn populated the hash table with the IP addresses to be blocked.

Setting up ipset and populating the hash tables from fail2ban can be done as below.

sqlite3 -csv /var/lib/fail2ban/fail2ban.sqlite3 "select distinct ip from bips order by ip" |    \
        perl -e 'BEGIN {print "create reisTmp hash:ip family inet hashsize 65536 maxelem 65536\n"; }
                print "add reisTmp $_" while (<>);'     \
        > ~/tmp/reisTmp

The name reisTmp is arbitrary. This way we have a file which looks like this:

create reisTmp hash:ip family inet hashsize 65536 maxelem 65536
add reisTmp 1.186.57.150
add reisTmp 1.193.76.18
add reisTmp 1.2.206.30
add reisTmp 1.202.77.210
add reisTmp 1.214.156.164
add reisTmp 1.214.245.27
. . .

This file is transferred from the sshd/fail2ban server to our router, if required, and is the input to be run through ipset:

ipset restore -f ~klm/tmp/reisTmp
ipset swap reisTmp reisbauer
ipset destroy reisTmp

This assumes you have created a hash table in ipset called reisbauer; again, this name is arbitrary. The idea behind this scenario is that populating the temporary hash table may take some time, while swapping the two hash tables is almost instant.
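The permanent set and an iptables rule that uses it can be created once beforehand. A minimal sketch; the set name reisbauer and the DROP target are just the choices used here:

ipset create reisbauer hash:ip family inet hashsize 65536 maxelem 65536
iptables -I INPUT -m set --match-set reisbauer src -j DROP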

To re-populate the hash table after a reboot, use

systemctl enable ipset
systemctl start ipset

This basically does

ipset -f /etc/ipset.conf restore
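For the ipset service to have something to restore, the current set has to be saved to /etc/ipset.conf beforehand, e.g. (assuming the Arch ipset service, which reads exactly this file):

ipset save > /etc/ipset.conf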

ASRock DeskMini A300M with AMD Ryzen 3400G

Below are some photographs taken during assembly of the ASRock A300M with an AMD Ryzen 5 Pro 3400G processor.

The ASRock website detailing the specs of the A300M: DeskMini A300 Series.

Three noteworthy reviews of the A300M:

  1. Anandtech has a very readable review of the A300M: The ASRock DeskMini A300 Review: An Affordable DIY AMD Ryzen mini-PC
  2. A short review from Techspot: Asrock DeskMini A300 Review.
  3. A German review with many photos taken during assembly: ASRock DeskMini A300 mit AMD Ryzen 5 2400G im Test.

1. CPU. Three photos of the AMD 3400G CPU:

2. Power. The power supply of the A300M will provide at most 19V x 6.32A = 120W.


3. Dimensions. The case has a volume of at most two liters, as illustrated by the two milk cartons.


4. Mounting. CPU mounted on motherboard.

5. Temperature. I installed a Noctua NH-L9a-AM4 cooler. Running the AMD at full speed under full load shows the following temperatures, as reported by the sensors command:

amdgpu-pci-0300
Adapter: PCI adapter
vddgfx:           N/A
vddnb:            N/A
edge:         +85.0°C  (crit = +80.0°C, hyst =  +0.0°C)

k10temp-pci-00c3
Adapter: PCI adapter
Vcore:         1.23 V
Vsoc:          1.07 V
Tctl:         +85.2°C
Tdie:         +85.2°C
Icore:       100.00 A
Isoc:          9.00 A

nvme-pci-0100
Adapter: PCI adapter
Composite:    +57.9°C  (low  =  -0.1°C, high = +74.8°C)
                       (crit = +79.8°C)

nct6793-isa-0290
Adapter: ISA adapter
in0:                   656.00 mV (min =  +0.00 V, max =  +1.74 V)
in1:                     1.86 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in2:                     3.41 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in3:                     3.39 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in4:                   328.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
in5:                   216.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
in6:                   448.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
in7:                     3.39 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in8:                     3.30 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in9:                     1.84 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in10:                  256.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
in11:                  216.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
in12:                    1.86 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in13:                    1.71 V  (min =  +0.00 V, max =  +0.00 V)  ALARM
in14:                  272.00 mV (min =  +0.00 V, max =  +0.00 V)  ALARM
fan1:                     0 RPM  (min =    0 RPM)
fan2:                  2626 RPM  (min =    0 RPM)
fan3:                     0 RPM  (min =    0 RPM)
fan4:                     0 RPM  (min =    0 RPM)
fan5:                     0 RPM  (min =    0 RPM)
SYSTIN:                 +97.0°C  (high =  +0.0°C, hyst =  +0.0°C)  sensor = thermistor
CPUTIN:                 +87.5°C  (high = +80.0°C, hyst = +75.0°C)  ALARM  sensor = thermistor
AUXTIN0:                +62.0°C  (high =  +0.0°C, hyst =  +0.0°C)  ALARM  sensor = thermistor
AUXTIN1:                +94.0°C    sensor = thermistor
AUXTIN2:                +90.0°C    sensor = thermistor
AUXTIN3:                +86.0°C    sensor = thermistor
SMBUSMASTER 0:          +85.0°C
PCH_CHIP_CPU_MAX_TEMP:   +0.0°C
PCH_CHIP_TEMP:           +0.0°C
PCH_CPU_TEMP:            +0.0°C
intrusion0:            OK
intrusion1:            ALARM
beep_enable:           disabled

Fully loaded:

  1  [||||||||||||||||||||||||||||||||||||||||                            52.9%]   Tasks: 120, 405 thr; 5 running
  2  [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||99.4%]   Load average: 7.28 6.82 6.02
  3  [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||99.4%]   Uptime: 8 days, 07:27:51
  4  [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||94.1%]
  5  [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||100.0%]
  6  [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||  88.2%]
  7  [|||||||||||||||||||||||||||||||||                                   44.2%]
  8  [|||||||||||||||||||||||||||||||||||||||                             50.6%]
  Mem[|||||||||||||||||||||||||||||||||||||||||||||||||||||||       28.7G/60.8G]
  Swp[                                                                    0K/0K]
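For reference, full load on all eight threads can be generated with a simple shell loop like the one below; this is only a sketch, not necessarily the workload that produced the numbers above:

# occupy all 8 threads, read the temperatures, then stop the load
for i in $(seq 8); do yes > /dev/null & done
sensors
killall yes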

Reply to: Neural Network Back-Propagation Revisited with Ordinary Differential Equations

This article is worth reading: Neural Network Back-Propagation Revisited with Ordinary Differential Equations

I replied:

Thank you very much for this very informative article providing many links, the Python code, and the results.

According to the mentioned paper by Owens and Filkin, the speedup expected from using a stiff ODE solver should be a factor of two up to 1,000. Your results demonstrate that the number of iterations is indeed far lower than for gradient descent and all its variants. Unfortunately, the reported run times are far slower than for gradient descent. This was not expected.

The following factors could play a role in this:

  1. All the methods search for a local minimum (the gradient should be zero; the Hessian is not checked or known). They do not necessarily find a global minimum. So when these different methods are run, each probably iterates towards a different local minimum, and each method has therefore likely computed something different.
  2. I wonder why you have used the zvode solver from scipy.integrate. I would recommend vode, or even better lsoda. Your chosen tolerances are quite strict (atol=1e-8, rtol=1e-6), at least for the beginning. Such strict tolerances may force smaller step-sizes than actually required. In particular, as the start values are random, there seems to be no compelling reason to use strict tolerances right from the start. It is also known that strict tolerances might lead the ODE code to use higher-order BDF methods, which in turn are not stable enough for highly stiff ODEs; only BDF methods up to order 2 are A-stable. So atol=1e-4, rtol=1e-4 might show different behaviour.
  3. Although the resulting ODE is expected to be stiff, it might be that in your particular setting the system was only very mildly stiff. Your charts give some indication of that, at least in the beginning. This can be checked by simply re-running with a non-stiff code such as dopri5, again with looser tolerances.
  4. As can be seen in your chart, above learning accuracy 80% the ODE solver takes a huge number of steps. It would be interesting to know whether at this stage there are many Jacobian evaluations for the ODE solver, or whether there are many rejected steps due to Newton convergence failures within the ODE code.
  5. scipy.integrate apparently does not allow for automatic differentiation. Therefore the ODE solver must resort to numerical differencing to evaluate the Jacobian, which is very slow. So using something like Zygote might help here.

I always wondered why the findings from Owens and Filkin were not widely adopted. Your paper provides an answer, although a negative one. Taking into account the above points, I still have hope that stiff ODE solvers have great potential for machine learning with regard to performance. You already mentioned that ODE solvers provide the benefit that hyperparameters no longer need to be estimated.

Calling C from Julia

Two ways to compute the error function or Bessel function in Julia.

1. Calling C. On UNIX libm provides erf() and j0(). So calling them goes like this:

ccall(("erf","libm.so.6"),Float64,(Float64,),0.1)
ccall(("j0"),Float64,(Float64,),3)

In the second call one can omit the reference to libm.so. Watch out for the funny looking (Float64,): the trailing comma makes it a one-element tuple of argument types.

2. Using Julia. SpecialFunctions.jl provides erf() and besselj0.

import Pkg
Pkg.add("SpecialFunctions")
import SpecialFunctions
SpecialFunctions.erf(0.1)
SpecialFunctions.besselj0(3)

Splitting and anti-merging vCard files

Sometimes vCard files need to be split into smaller files, or the file needs to be protected against merging in another application.

1. Splitting. The Perl script below splits the input file into as many files as required. Output files are named adr.1.vcf, adr.2.vcf, etc. You can pass the command line argument “-n” to specify the number of card records per file. Splitting a vCard file is provided in palmadrsplit on GitHub:

use Getopt::Std;

my %opts;
getopts('n:',\%opts);
my ($i,$k,$n) = (1,0,950);
$n = ( defined($opts{'n'}) ? $opts{'n'} : 950 );

open(F,">adr.$i.vcf") || die("Cannot open adr.$i.vcf for writing");
while (<>) {
        if (/BEGIN:VCARD/) {
                if (++$k % $n == 0) {   # next address record
                        close(F) || die("Cannot close adr.$i.vcf");
                        ++$i;   # next file number
                        open(F,">adr.$i.vcf") || die("Cannot open adr.$i.vcf for writing");
                }
        }
        print F $_;
}
close(F) || die("Cannot close adr.$i.vcf");

This is required for Google Contacts, as Google does not allow importing more than 1,000 records per day, see Quotas for Google Services.
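Assuming the script is saved as palmadrsplit, splitting a large export into files of 950 records each could look like this (allcontacts.vcf is a placeholder name):

perl palmadrsplit -n 950 allcontacts.vcf     # writes adr.1.vcf, adr.2.vcf, ...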

2. Anti-Merge. Inhibiting the annoying merging is given in file palmantimerge on GitHub. The overall logic is as follows: read the entire vCard file; each card, delimited by BEGIN:VCARD and END:VCARD, is put into a hash map. Each hash entry is a list of vCards. The hash key is the N: entry, i.e., the concatenation of lastname and firstname. Once everything is hashed, walk through the hash. Hash entries whose list contains just a single card can be output as is. Where a list contains more than one card, the cards would otherwise be merged, so their N: parts are modified using the ORG: field.

use strict;
my @singleCard = ();    # all info between BEGIN:VCARD and END:VCARD
my ($name) = "";        # N: part, i.e., lastname semicolon firstname
my ($clashes,$line,$org) = (0,"","");
my %allCards = ();      # each entry is a list of single cards belonging to the same first and last name, so hash of array of array

while (<>) {
        if (/BEGIN:VCARD/) {
                ($name,@singleCard) = ("", ());
                push @singleCard, $_;
        } elsif (/END:VCARD/) {
                push @singleCard, $_;
                push @{ $allCards{$name} }, [ @singleCard ];
        } else {
                push @singleCard, $_;
                $name = $_ if (/^N:/);
        }
}

for $name (keys %allCards) {
        $clashes = $#{$allCards{$name}};
        for my $sglCrd (@{$allCards{$name}}) {
                if ($clashes == 0) {
                        for $line (@{$sglCrd}) { print $line; }
                } else {
                        $org = "";
                        for $line (@{$sglCrd}) {
                                $org = $1 if ($line =~ /^ORG:([ \-\+\w]+)/);
                        }
                        for $line (@{$sglCrd}) {
                                $line =~ s/;/ \/${org}\/;/ if ($line =~ /^N:/);
                                print $line;
                        }
                }
        }
}

Every lastname has “/organization/” appended if the combination of firstname and lastname is not unique. For example, two records for Peter Miller in ABC-Corp and XYZ-Corp will be written as N:Miller /ABC-Corp/;Peter and N:Miller /XYZ-Corp/;Peter.

This way Simple Mobile Tools Contacts will not merge records together which it shouldn’t. Issue #446 for this is on GitHub.

Automated Rebooting of Auerswald Communication System

The wired telephones in my house are connected to a telephone system from Auerswald. This PBX handles VoIP and ISDN. My children make fun of me for still using landlines; they just use cell phones.

Unfortunately, for a couple of months now the system has no longer been fully reliable and needs constant reboots, for unknown reasons. I deleted the entire call history in the hope that the reduced storage use would alleviate the problem, but it did not help. So I had to automate the reboots. The script below mimics the login screen and the reboot screen. To figure out the details of the login screen I used Firefox's network analyzer to see which URL and which commands are sent to the web server.

Script is:

# Login
curl -s -o AW_Login.html -d LOGIN_NOW=true -d jssupport=true -d timeout=200 -d LOGIN_NAME=admin -d LOGIN_PASS=SecretPasswd -d Anmelden=Anmelden -c AW_cookieJar http://tk/login

sleep 2

# Ask for reboot
curl -s -o AW_Reboot.html -b AW_cookieJar -d reboottimevalue=0 -d rebootpbx=Neustart 'http://tk/updownload?timereboot_PBX=1&reboottime=0'

The telephone-system has DNS name tk. This name is arbitrary.

This script is then run via cron.
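A minimal crontab entry for a daily reboot at, say, 03:00 could look like this; the script path is hypothetical:

# min hour day month weekday command
0 3 * * * /home/klm/bin/reboot-auerswald.sh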

Final remark: Unfortunately, even with daily reboots, the telephone system still misbehaves at random.

Hosting Static Content with netlify.app

Many people associate netlify.com with its integration with GitHub, GitLab, or Bitbucket. But you can just as well deploy your local files to Netlify.

Install the netlify command via npm install netlify-cli. cd to the directory containing your static content. Log in to Netlify via

netlify login

This will directly open a browser, where you confirm the login or enter your credentials. If no browser opens, enter the URL given on the console manually. Once you are logged in and have configured your domain name in the Netlify administration menu, you can deploy. For this, run the command below:

netlify deploy --prod -d .

Netlify has the notion of two environments: preview and production. You can skip the preview environment and deploy directly to production.
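If you do want to inspect a preview first, simply omit --prod; Netlify then deploys to a generated draft URL:

netlify deploy -d .          # draft/preview deploy
netlify deploy --prod -d .   # then deploy the same content to production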

The netlify command offers the following options.

$ netlify -h
Netlify command line tool

VERSION
  netlify-cli/2.48.0 linux-x64 node-v13.13.0

USAGE
  $ netlify [COMMAND]

COMMANDS
  addons     (Beta) Manage Netlify Add-ons
  api        Run any Netlify API method
  build      (Beta) Build on your local machine
  deploy     Create a new deploy from the contents of a folder
  dev        Local dev server
  functions  Manage netlify functions
  help       display help for netlify
  init       Configure continuous deployment for a new or existing site
  link       Link a local repo or project folder to an existing site on Netlify
  login      Login to your Netlify account
  open       Open settings for the site linked to the current folder
  plugins    list installed plugins
  sites      Handle various site operations
  status     Print status information
  switch     Switch your active Netlify account
  unlink     Unlink a local folder from a Netlify site
  watch      Watch for site deploy to finish

The netlify script stores your credentials in $HOME/.netlify/config.json.

Hosting Static Content with now.sh

now.sh, previously known as zeit.co and since rebranded as vercel.com, allows you to host static content. There is no PHP, MySQL/MariaDB, Perl, CGI, etc. While surge.sh is super simple to use, now.sh in contrast uses the notion of an ‘environment’, which can be either development, preview, or production.

First install via npm install now, then cd to the directory where you have stored your static content. Then, depending on your environment, deployment is as follows:

  1. Development: now dev; no “real” deployment, but rather a web server is started at localhost:3000. Stop with Ctrl-C.
  2. Preview: now or now deploy
  3. Production: now deploy --prod

It is not required to step through all the environments in any particular order, so you can just always deploy to production. Regardless of the environment, now.sh always creates an additional HTML web area with a generated name. This name might look like now-ll1keyjfk.now.sh.

The now command has the following options:

$ now -h                                                                                                           
> UPDATE AVAILABLE Run `npm i now@latest` to install Now CLI 18.0.0                                                                                
> Changelog: https://github.com/zeit/now/releases/tag/now@18.0.0                                                                                   
                                                                                                                                                   
  𝚫 now [options] <command | path>                                                                                                                 
                                                                                                                                                   
  Commands:                                                                                                                                        
                                                                                                                                                   
    Basic                                                                                                                                          
                                                                                                                                                   
      deploy               [path]      Performs a deployment (default)                                                                             
      dev                              Start a local development server                                                                            
      init                 [example]   Initialize an example project                                                                               
      ls | list            [app]       Lists deployments                                                                                           
      inspect              [id]        Displays information related to a deployment                                                                
      login                [email]     Logs into your account or creates a new one                                                                 
      logout                           Logs out of your account                                                                                    
      switch               [scope]     Switches between teams and your personal account                                                            
      help                 [cmd]       Displays complete help for [cmd]                                                                            
                                                                                                                                                   
    Advanced                                                                                                                                       
                                                                                                                                                   
      rm | remove          [id]        Removes a deployment                                                                                        
      domains              [name]      Manages your domain names                                                                                   
      dns                  [name]      Manages your DNS records                                                                                    
      certs                [cmd]       Manages your SSL certificates                                                                               
      secrets              [name]      Manages your secret environment variables                                                                   
      logs                 [url]       Displays the logs for a deployment                                                                          
      teams                            Manages your teams                                                                                          
      whoami                           Shows the username of the currently logged in user                                                          
                                                                                                                                                   
  Options:

    -h, --help                     Output usage information
    -v, --version                  Output the version number
    -V, --platform-version         Set the platform version to deploy to
    -n, --name                     Set the project name of the deployment 
    -A FILE, --local-config=FILE   Path to the local `now.json` file
    -Q DIR, --global-config=DIR    Path to the global `.now` directory
    -d, --debug                    Debug mode [off]
    -f, --force                    Force a new deployment even if nothing has changed
    -t TOKEN, --token=TOKEN        Login token
    -p, --public                   Deployment is public (`/_src` is exposed)
    -e, --env                      Include an env var during run time (e.g.: `-e KEY=value`). Can appear many times.
    -b, --build-env                Similar to `--env` but for build time only.
    -m, --meta                     Add metadata for the deployment (e.g.: `-m KEY=value`). Can appear many times.
    -C, --no-clipboard             Do not attempt to copy URL to clipboard
    -S, --scope                    Set a custom scope
    --regions                      Set default regions to enable the deployment on
    --prod                         Create a production deployment

  > NOTE: To view the usage information for Now 1.0, run `now help deploy-v1`

  Examples:

  – Deploy the current directory

    $ now

  – Deploy a custom path

    $ now /usr/src/project

  – Deploy with environment variables

    $ now -e NODE_ENV=production -e SECRET=@mysql-secret

  – Show the usage information for the sub command `list`

    $ now help list

now.sh stores your credentials in $HOME/.local/share/now/auth.json.

Hosting Static Content with surge.sh

When you want totally hassle-free hosting of static HTML, then surge.sh is very attractive. It is easy to set up and free of charge for most private users. It offers https out of the box from sectigo.com. It does not offer PHP, MySQL/MariaDB, CGI, Perl, etc. Just static HTML with CSS, JavaScript, images, etc. Your static content will be hosted on Your_Domain.surge.sh.

Steps to follow:

  1. Install surge: npm install surge
  2. cd to your directory with static content and type surge

It could not be easier. If you do not want to enter the domain name over and over again, you can store the chosen domain name in the file CNAME and you won't be asked the next time:

echo Your_Domain > CNAME

The surge command offers the following options.

$ surge --help

  surge – single command web publishing. (v0.21.3)

  Usage:
    surge <project> <domain>

  Options:
    -a, --add           adds user to list of collaborators (email address)
    -r, --remove        removes user from list of collaborators (email address)
    -V, --version       show the version number
    -h, --help          show this help message

  Additional commands:
    surge whoami        show who you are logged in as
    surge logout        expire local token
    surge login         only performs authentication step
    surge list          list all domains you have access to
    surge teardown      tear down a published project
    surge plan          set account plan

  Guides:
    Getting started     surge.sh/help/getting-started-with-surge
    Custom domains      surge.sh/help/adding-a-custom-domain
    Additional help     surge.sh/help

  When in doubt, run surge from within your project directory.

Your e-mail and encrypted password are stored in $HOME/.netrc.

Performance Comparison Pallene vs. Lua 5.1, 5.2, 5.3, 5.4 vs. C

Installing Pallene is described in the previous post: Installing Pallene Compiler. In this post we test the performance of Pallene versus C, Lua 5.4, and LuaJIT.

1. Array Access. I tested a program similar to the one in Performance Comparison C vs. Lua vs. LuaJIT vs. Java.

function lua_perf(N:integer, S:integer)
        local t:{ {a:float, b:float, f:float} } = {}

        for i = 1, N do
                t[i] = {
                        a = 0.0,
                        b = 1.0,
                        f = i * 0.25
                }
        end

        for j = 1, S-1 do
                for i = 1, N-1 do
                        t[i].a = t[i].a + t[i].b * t[i].f
                        t[i].b = t[i].b - t[i].a * t[i].f
                end
                --io_write( t[1].a )
        end
end
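Assuming the function above is saved in a file lua_perf.pln, it can be compiled and timed roughly as follows; loading the compiled module mirrors the factorial example from Installing Pallene Compiler, and the values for N and S are arbitrary picks for illustration:

pallenec lua_perf.pln
time ./lua/src/lua -l lua_perf -e 'lua_perf.lua_perf(100000, 1000)'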

This program, which does no I/O at all, runs in 0.14s and is therefore two times slower than LuaJIT, which finishes in 0.07s. This is somewhat disappointing. Lua 5.4, as bundled with Pallene, needs 0.75s, so Pallene is roughly five times faster than Lua.

Installing Pallene Compiler

Pallene is a Lua-based language. In contrast to Lua, which is dynamically typed, Pallene is statically typed. A good paper on Pallene is “Pallene: A companion language for Lua” by Hugo Musso Gualandi and Roberto Ierusalimschy.

From above paper:

The compiler itself is quite conventional. After a standard parsing step, it converts the program to a high-level intermediate form and from that it emits C code, which is then fed into a C compiler such as gcc.

From “A gradually typed subset of a scripting language can be simple and efficient”:

Pallene was designed for performance, and one fundamental part of that is that its compiler generates efficient machine code. To simplify the implementation, and for portability, Pallene generates C source code instead of directly generating assembly language.

So, very generally, this idea is similar to f2c (Fortran to C), cobc (Cobol compiler), or Lush (Lisp Universal SHell).

The whole Pallene compiler is implemented in fewer than 7,000 lines of Lua, plus fewer than 1,000 lines of C source code for the runtime.

To install the Pallene compiler you need git, gcc, lua, and luarocks. The description below is for Linux; macOS is very similar.

1. Source. Fetch source code via git clone.

$ git clone https://github.com/pallene-lang/pallene.git 
Cloning into 'pallene'...
$ cd pallene

2. Rocks. Fetch required Lua rocks via luarocks command.

$ luarocks install --local --only-deps pallene-dev-1.rockspec
Missing dependencies for pallene dev-1:                                                                                                                 
   lpeglabel >= 1.5.0 (not installed)                                                                                                                   
   inspect >= 3.1.0 (not installed)                                                                                                                     
   argparse >= 0.7.0 (not installed)                                                                                                                    
   luafilesystem >= 1.7.0 (not installed)                                                                                                               
   chronos >= 0.2 (not installed)                                                                                                                       
                                                                                                                                                        
pallene dev-1 depends on lua ~> 5.3 (5.3-1 provided by VM)                                                                                              
pallene dev-1 depends on lpeglabel >= 1.5.0 (not installed)                                                                                             
Installing https://luarocks.org/lpeglabel-1.6.0-1.src.rock                                                                                              
                                                                                                                                                        
lpeglabel 1.6.0-1 depends on lua >= 5.1 (5.3-1 provided by VM)
gcc -O2 -fPIC -I/usr/include -c lpcap.c -o lpcap.o
gcc -O2 -fPIC -I/usr/include -c lpcode.c -o lpcode.o
gcc -O2 -fPIC -I/usr/include -c lpprint.c -o lpprint.o
gcc -O2 -fPIC -I/usr/include -c lptree.c -o lptree.o
gcc -O2 -fPIC -I/usr/include -c lpvm.c -o lpvm.o
gcc -shared -o lpeglabel.so lpcap.o lpcode.o lpprint.o lptree.o lpvm.o
No existing manifest. Attempting to rebuild...
lpeglabel 1.6.0-1 is now installed in /home/klm/.luarocks (license: MIT/X11) 

pallene dev-1 depends on inspect >= 3.1.0 (not installed)
Installing https://luarocks.org/inspect-3.1.1-0.src.rock

inspect 3.1.1-0 depends on lua >= 5.1 (5.3-1 provided by VM)
inspect 3.1.1-0 is now installed in /home/klm/.luarocks (license: MIT <http://opensource.org/licenses/MIT>)

pallene dev-1 depends on argparse >= 0.7.0 (not installed)
Installing https://luarocks.org/argparse-0.7.0-1.all.rock

argparse 0.7.0-1 depends on lua >= 5.1, < 5.4 (5.3-1 provided by VM)
argparse 0.7.0-1 is now installed in /home/klm/.luarocks (license: MIT)

pallene dev-1 depends on luafilesystem >= 1.7.0 (not installed)
Installing https://luarocks.org/luafilesystem-1.8.0-1.src.rock

luafilesystem 1.8.0-1 depends on lua >= 5.1 (5.3-1 provided by VM)
gcc -O2 -fPIC -I/usr/include -c src/lfs.c -o src/lfs.o
gcc -shared -o lfs.so src/lfs.o
luafilesystem 1.8.0-1 is now installed in /home/klm/.luarocks (license: MIT/X11)

pallene dev-1 depends on chronos >= 0.2 (not installed)
Installing https://luarocks.org/chronos-0.2-4.src.rock

chronos 0.2-4 depends on lua >= 5.1 (5.3-1 provided by VM)
gcc -O2 -fPIC -I/usr/include -c src/chronos.c -o src/chronos.o -I/usr/include
gcc -shared -o chronos.so src/chronos.o -L/usr/lib -Wl,-rpath,/usr/lib -lrt
chronos 0.2-4 is now installed in /home/klm/.luarocks (license: MIT/X11)

Stopping after installing dependencies for pallene dev-1

3. Environment variables. Make sure that you source the environment variables given by

luarocks path

For example:

export LUA_PATH='/usr/share/lua/5.3/?.lua;/usr/share/lua/5.3/?/init.lua;/usr/lib/lua/5.3/?.lua;/usr/lib/lua/5.3/?/init.lua;./?.lua;./?/init.lua;/home/klm/.luarocks/share/lua/5.3/?.lua;/home/klm/.luarocks/share/lua/5.3/?/init.lua'
export LUA_CPATH='/usr/lib/lua/5.3/?.so;/usr/lib/lua/5.3/loadall.so;./?.so;/home/klm/.luarocks/lib/lua/5.3/?.so'
export PATH='/home/klm/.luarocks/bin:/usr/bin:/home/klm/bin:...:.'
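Instead of copying these export statements manually, they can also be set in one go, assuming a POSIX shell:

eval "$(luarocks path)"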

4. Build Lua and runtime. Build Lua and the Pallene runtime (you are still in the pallene directory):

make linux-readline

Some warnings will show up for Lua, but they can be ignored for now.

5. Run compiler. Now you can run pallenec, provided you are still in the directory where you built Pallene.

$ ./pallenec   
Usage: pallenec [-h] [--emit-c] [--emit-asm] [--compile-c]
       [--dump {parser,checker,ir,uninitialized,constant_propagation}]
       <source_file>

Error: missing argument 'source_file'

6. Run example. Now check one of the examples.

$ pallenec examples/factorial/factorial.pln
$ ./lua/src/lua -l factorial examples/factorial/main.lua 
The factorial of 5 is 120.

The most common error is to use the system-wide lua instead of the lua/src/lua command that comes with Pallene.

You can compile all examples and benchmarks:

for i in examples/*/*.pln; do pallenec $i; done
for i in benchmark/*/*.pln; do pallenec $i; done

Things to note in Pallene:

  1. Array indexes must start at one.
  2. Pallene source code, except type definitions, record definitions, and variable definitions, must be within a function.
  3. Pallene offers no goto statement. (The goto statement was added in Lua 5.2.)