IoT Push Button (Like Amazon Dash)

The Amazon Dash button is an incredible piece of hardware and another example of "applied engineering" at Amazon, just like the Kindle.

The inspiration for this project wasn't directly derived from the Amazon Dash; I wasn't aware of it until I started digging around the internet. In the end, though, I set the Amazon Dash as my performance benchmark, because no other piece of hardware was up to the mark. A detailed teardown of the Dash button can be found here.

For ages I have wanted to build a piece of hardware that I could dynamically program for any functionality: playing the next YouTube video, unlocking a door, rebooting a machine, minimizing all open tabs, or whatever, with the push of a button. The button should be portable and operate independently of any host device.

To meet these requirements I had to rule out BLE and any other radio-based technology that needed a receiver or an additional unit to operate. The ESP-12F is power hungry and somewhat large compared to the ESP-01. The end prototype looked like this and worked as expected.

Below is the list of components I used.

  • 1x LiPo Battery – 150mAh
  • 1x ESP8266-01
  • 1x 1kΩ Resistor
  • 1x Tactile Switch Button

You will also need a soldering iron with a fine tip, basic desoldering skills, access to a 3D printer, and an FTDI board or a similar USB-to-serial setup.

The schematic of the setup looks like this:

The flow works like this: we turn on the ESP using the push button, but it takes time to connect to Wi-Fi and authenticate, so we need to keep it powered for quite a while. This can be done with a programmable output pin: as soon as the ESP turns on, GPIO2 is set HIGH to latch the power on, and once the operation is finished it is set LOW. A few challenges I found on the way (with fixes):

  • Power is very limited, so we need to desolder the LEDs from the ESP; this saves a lot of power and almost doubles battery life!
  • You should use a diode and a transistor to limit current; since I am not an electronics nerd I won't comment further on it.
  • You should set a timeout period if you are writing custom logic, otherwise the battery will drain and you will never know why!

After soldering, you have to upload the following code. It needs to be tweaked to your use case, but it contains the basic logic.

The code falls back to hotspot mode if it can't connect to Wi-Fi; if it can, it connects to the MQTT server, makes an announcement, and then shuts itself down.
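A minimal sketch of that logic, assuming the Arduino core for ESP8266 and the PubSubClient MQTT library; the Wi-Fi credentials, broker address and topic below are placeholders:

```cpp
// Minimal sketch for the ESP8266-01 push button (assumes the Arduino
// core for ESP8266 and the PubSubClient MQTT library; credentials,
// broker address and topic are placeholders).
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

const char* WIFI_SSID  = "your-ssid";       // placeholder
const char* WIFI_PASS  = "your-password";   // placeholder
const char* MQTT_HOST  = "10.0.0.10";       // placeholder broker
const char* MQTT_TOPIC = "button/pressed";  // placeholder topic

const int HOLD_PIN = 2;                     // GPIO2 keeps the power latch on
const unsigned long WIFI_TIMEOUT = 15000;   // give up after 15 s (see fix above)

WiFiClient wifiClient;
PubSubClient mqtt(wifiClient);

void setup() {
  pinMode(HOLD_PIN, OUTPUT);
  digitalWrite(HOLD_PIN, HIGH);             // latch power on immediately

  WiFi.mode(WIFI_STA);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  unsigned long start = millis();
  while (WiFi.status() != WL_CONNECTED) {
    if (millis() - start > WIFI_TIMEOUT) {
      // Couldn't join Wi-Fi: fall back to hotspot mode for reconfiguration.
      WiFi.mode(WIFI_AP);
      WiFi.softAP("iot-button-setup");
      return;                               // stay powered so we can be configured
    }
    delay(100);
  }

  mqtt.setServer(MQTT_HOST, 1883);
  if (mqtt.connect("iot-button")) {
    mqtt.publish(MQTT_TOPIC, "pressed");    // make the announcement
    mqtt.disconnect();
  }
  digitalWrite(HOLD_PIN, LOW);              // release the latch: cut our own power
}

void loop() {}
```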

I hope you enjoyed the article. It isn't a detailed writeup and I dropped many details, but if you have any questions you can comment or drop me a mail; I will surely help.

Adding a display over the network!

You are doomed if your laptop has only one HDMI port and you are running Linux on that box.

Unlike Windows, where there are tons of easy-to-run solutions, and tons more if you have deep pockets.

I neither wished to spend money on an external VGA/HDMI extender or docking station, nor to change OS. So after a lot of googling I discovered a solution that uses a different machine as the streaming client. You can use a Raspberry Pi or an old Pentium 4 machine.

Below is how it works.

  1. Add a virtual display on your machine; most graphics cards support at least one virtual output. You do this using xrandr.
  2. Create a VNC server to stream that display. Since you can't run two commands every time just to connect a display, we run the VNC viewer in listen mode.
  3. Connect to the VNC viewer and keep it running in the background.

Your Laptop —[Display Data]—> Network —> VNC Viewer

As suggested above, your data is being streamed over the network, so you can't push 4K. But with decent LAN speed you won't face any problem.

I have also tuned the settings for the best experience. So far I can use a terminal and watch videos; the only drawback is that you feel the lag when you use the keyboard or mouse and expect real-time feedback.

On your laptop, i.e. the machine whose display is being extended:

You will need to install x11vnc and screen.
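A sketch of the laptop side, assuming an Intel driver exposing a spare VIRTUAL1 output, a physical eDP1 panel at 1920x1080, and a 1280x1024 remote screen (check your own xrandr output for names and geometry):

```sh
# Create a modeline for the virtual output and enable it to the right
# of the physical panel (VIRTUAL1/eDP1 are driver-specific names).
xrandr --newmode "1280x1024_60" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
xrandr --addmode VIRTUAL1 "1280x1024_60"
xrandr --output VIRTUAL1 --mode "1280x1024_60" --right-of eDP1

# Stream only the virtual display's region to the listening viewer on
# 10.0.0.2, detached inside `screen` so it survives the terminal.
screen -dmS vncstream x11vnc -clip 1280x1024+1920+0 -connect 10.0.0.2 -repeat -nocursorshape
```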

On the second machine (LAN address 10.0.0.2), the one driving the extra monitor:

You will need to install a VNC viewer and run it in listen mode.
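Then a single command keeps it waiting for the laptop's x11vnc to connect (shown for TightVNC's viewer; any viewer with a listen mode works):

```sh
# Listen mode: the viewer waits for incoming reverse connections
# instead of dialing out itself.
vncviewer -listen -fullscreen
```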

Kindle Universal Dashboard

The Kindle is super awesome, because of its e-ink display.

I wanted a display to present data with the least power consumption, one that isn't painful to the eyes and, obviously, one that doesn't emit blue light.
E-ink displays fit my requirements perfectly, but acquiring one that can be driven by a Raspberry Pi or Arduino is hard, and the cost relative to size is much higher.

On googling I found some display modules at around $70-80, even for the smaller sizes, which pushed me to get a Kindle Touch (8th generation) for approximately $74 instead.

The Kindle comes with an experimental browser, a stripped-down version of WebKit. It is pretty useful if you want to display static content or just build a control panel: it can easily render normal websites that use JS/CSS and works pretty well. But support for HTML5 is almost absent, so you can't use WebSockets to push data; long polling or fast polling is the only solution so far.

Moreover, there was another problem to fix: the Kindle has a fixed timeout that sends it to sleep mode, 10 minutes on mine. After digging I found you can supposedly use ~ds or similar commands, but for me nothing worked.

We can only hope that support for removing or changing the timeout will be added in future releases. I think older Kindles support it.

If you can't change the timeout, or you want a few other features, I suggest you jailbreak. Follow the steps mentioned at http://www.mobileread.com/forums/showthread.php?t=275877. Don't skip ahead; it works for the Kindle Touch 8th generation. Tried, tested, working! For the KT3 you will need to install MRPI along with KUAL. Once done, your Kindle is officially out of warranty 😀. After that you need to install USBNet, which will allow you to SSH into your Kindle. All this lets you execute "lipc-set-prop com.lab126.powerd preventScreenSaver 1" on the Kindle, which simply disables the screensaver. 🙂
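For reference, over USBNet that boils down to something like this (192.168.15.244 is the commonly used USBNet address; yours may differ):

```sh
# SSH into the jailbroken Kindle over USB networking and disable
# the screensaver for the current session.
ssh root@192.168.15.244 'lipc-set-prop com.lab126.powerd preventScreenSaver 1'
```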

Once you have a Kindle whose screen doesn't lock, you can simply run a simple Node.js script to push data.

Note: the Kindle doesn't support WebSocket, nor any of socket.io's transport methods except "polling".

Below is the Node.js server code:
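(A minimal sketch, assuming Express and socket.io; the port, the served client.html and the demo time payload are placeholders.)

```js
// server.js - minimal push server for the Kindle dashboard.
// Assumes: npm install express socket.io
const express = require('express');
const app = express();
const server = require('http').createServer(app);
// The Kindle browser only speaks long polling, so disable websockets.
const io = require('socket.io')(server, { transports: ['polling'] });

app.get('/', (req, res) => {
  res.sendFile(__dirname + '/client.html');
});

io.on('connection', (socket) => {
  console.log('kindle connected');
  // Demo payload: push the current time every 30 seconds (placeholder).
  const timer = setInterval(() => {
    socket.emit('data', { time: new Date().toLocaleTimeString() });
  }, 30000);
  socket.on('disconnect', () => clearInterval(timer));
});

server.listen(3000, () => console.log('listening on :3000'));
```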

Below is the code of client.html:
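(Again a minimal sketch; it assumes the server above and forces the polling transport, since the Kindle browser has no WebSocket.)

```html
<!-- client.html - minimal Kindle-side page; forces the polling transport. -->
<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Dashboard</title></head>
<body>
  <h1 id="out">waiting...</h1>
  <script src="/socket.io/socket.io.js"></script>
  <script>
    // The Kindle browser has no WebSocket support, so force polling.
    var socket = io({ transports: ['polling'], upgrade: false });
    socket.on('data', function (msg) {
      document.getElementById('out').textContent = msg.time;
    });
  </script>
</body>
</html>
```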

Voila, it works!

Below is a video of it working, in case you want to see a demo before getting your hands dirty 🙂

Warning & Update: this method might consume more power than expected, as the experimental browser has a loading indicator that continuously refreshes a section of the screen. To overcome this I will poll the server at a long interval, adjustable from the server side.

Note for nerds: since this method uses the browser it's more flexible, but if you are fussy about power consumption and screen space you can use Java to develop a Kindlet application. Lightweight pub/sub protocols like MQTT should help you along the way.


Designing a wall holder:
You can google for covers, design your own, or use some double-sided foam tape. Since I had access to a 3D printer, I got two of http://www.thingiverse.com/thing:832273 printed and hung it on the wall. It also helped me read a few books apart from serving as a display. Swap!

Use it as:

  • Scoreboard
  • Notification
  • Weather system
  • Wallpaper slideshow
  • News/RSS feeds display
  • Home automation control
  • Anything
  • Book reader 🙂

In the end, even if you place it right beside your monitor, it won't hurt your eyes or push the new data at you and spoil the code castle you were building.

Offline Wikipedia with Elasticsearch and MediaWiki

Wikipedia is awesome! It's open, it's free, and it's huge: millions of articles. But as a developer, how do you exploit all that free knowledge?

I started digging around the internet to find ways to use my fresh 9+ GB gzipped XML archive, which seemed useless at first since even a simple text editor can't open it. (Just out of excitement: what's inside? How is it structured? What's the schema?)

Luckily, people have already done the import work. Elasticsearch is fast, reliable and good at searching, so https://github.com/andrewvc/wikiparse was a saver.

  • Installed Elasticsearch
  • Ran the import command

It took almost 48 hours on an i5 with 8 GB RAM; my mistake was using the same hard disk for the dump storage and the database. Your time might vary.

The data was imported but still of no use. Why? It's in wikitext format; a parser is needed.

After searching, the only solution I found was the MediaWiki API, which is written in PHP. Lots of things were missing, since it is built for MediaWiki itself rather than for parsing arbitrary wikitext. (Though I didn't spend much time learning the internal API.)

I quickly downloaded MediaWiki, ran nginx with PHP, installed it and used api.php.
It was good to see my own offline API, but many things were still missing or confusing, and the API response structure is hard to modify. So I created a parse.php.
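A minimal sketch of what such a parse.php can look like (my reconstruction: the Elasticsearch index name en-wikipedia, the text source field and the MediaWiki path are assumptions; adjust them to your setup):

```php
<?php
// parse.php - a sketch: look an article up in Elasticsearch, then render
// its wikitext through the local MediaWiki api.php.
// ASSUMPTIONS: the index name "en-wikipedia", the "text" source field and
// the MediaWiki URL below; check what wikiparse created on your box.

$title = isset($_GET['title']) ? $_GET['title'] : 'Earth';

// 1. Fetch the raw wikitext from Elasticsearch.
$es = json_decode(file_get_contents(
    'http://localhost:9200/en-wikipedia/_search?q=title:' . urlencode($title)
), true);
if (empty($es['hits']['hits'])) {
    http_response_code(404);
    exit('Article not found');
}
$wikitext = $es['hits']['hits'][0]['_source']['text'];

// 2. POST the wikitext to the local MediaWiki install to get HTML back.
$ctx = stream_context_create(array('http' => array(
    'method'  => 'POST',
    'header'  => 'Content-Type: application/x-www-form-urlencoded',
    'content' => http_build_query(array(
        'action'       => 'parse',
        'format'       => 'json',
        'contentmodel' => 'wikitext',
        'text'         => $wikitext,
    )),
)));
$resp = json_decode(
    file_get_contents('http://localhost/mediawiki/api.php', false, $ctx),
    true
);

header('Content-Type: text/html; charset=utf-8');
echo $resp['parse']['text']['*'];
```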

So, to recap, all the steps were:

  1. Download the Wikipedia XML dump.
  2. Install Elasticsearch.
  3. Import the dump using wikiparse.
  4. Install MediaWiki behind nginx with PHP.
  5. Use api.php, or a parse.php on top of it, to serve parsed articles.

Email Spoofing – Why it's dead!

There was a time when mail spoofing was an art: a thing to impress people with, a way to phish someone.
With increasing intelligence in spam filters it became harder; you need a good IP reputation to deliver mail to the inbox.
But now it has become almost impossible to spoof an address like [email protected]. Why? Have computers turned intelligent? No.

The problem of spam protection isn't new to the market. So people came up with DNS-based solutions that allow a sender to list the IP addresses authorized to send mail for a domain.
"Sender Policy Framework (SPF) for Authorizing Use of Domains in E-Mail" – you can read the RFC at https://www.ietf.org/rfc/rfc4408.txt (if you want to dig).
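An SPF record is just a TXT entry on the domain; for instance, one authorizing a subnet and Google's servers (all values illustrative):

```
example.com.    IN TXT "v=spf1 ip4:192.0.2.0/24 include:_spf.google.com -all"
```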

The standard was good. Not just good, it was the best! It blocked all the easy ways to prank people, but spoofed mails were still being delivered, because network administrators weren't diligent enough to list all their servers. So, as a workaround, big providers ran algorithms on top to make sure genuine mails that fail SPF are not delivered to spam.

This is all good, but for hardcore phishers it only became a little harder: people do check their mail regularly, and getting into a network can be as simple as distributing malware.
An attacker who can perform a MITM can alter the content of a mail while it is being delivered.

There wasn't any check against that.

The solution was DKIM (DomainKeys Identified Mail) signatures. It allows mail servers to sign the message body and selected header fields using defined hashing algorithms, with verification via a public/private key pair. The public key is published as a DNS record, while the private key is kept private.
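The public key lives in a TXT record under a selector subdomain, something like this (the selector name and truncated key are illustrative):

```
selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB..."
```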

Acquiring the private key is hard; it's the hardest part of the attack. You need to rotate keys to make sure no one cracks them: a 2048-bit key makes mail processing slower, while a 512-bit key is easy to crack with present-day computing.

DKIM provides a way to authorize only certain applications to send mail, but there was still no way to get reports on how effective the measure is, how many mails are being spoofed, and what receivers should do with spoofed mails.

Mails were still being delivered even after a DKIM failure.

So people came up with the DMARC standard. Again published as a DNS TXT record, it helps with getting reports and also with blocking mails. Check the RFC at https://datatracker.ietf.org/doc/rfc7489/
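A typical policy record looks like this (the domain, the quarantine policy and the report address are illustrative):

```
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```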

As with every security system, these standards come with an overhead: they make mail processing resource-intensive. There are many ways to reduce the processing cost while keeping security up to date.

There were many spamming attacks originating on behalf of our site; after implementing DMARC using DMARC Plus, they reduced by almost 80% within a few months.

One thing to note: if you make a single mistake in any of these DNS records you can lose all your mail, so it's better to take advice from someone who knows the standards well and can help you deploy them. Make sure you go slow…

Website Optimization – Cache, Cache, Cache!

You must have heard about caches on the web (caches are everywhere in computer science); most of the time you find them really annoying when changes aren't reflected as soon as you make them.
For sites with small traffic caching just feels buggy, but it cuts a major share of server traffic when you have a million hits, or even a thousand.

From server to browser, what can we cache, and why?

  1. Enabling code caching – if you are running a Node.js server, things are in your favour: the process keeps running and keeps its variables in memory. Still, make sure you don't fetch/store too much data, and implement caching in your code. If you are using PHP or a similar scripting language, let me tell you, things are really slow: on each request Apache spawns a PHP worker (or nginx opens a new FastCGI connection to PHP-FPM), the PHP code is compiled to opcode and then executed. That's a lot. You can use an opcode cache such as OPcache or APC to optimize PHP; other languages have their own equivalents.
  2. Caching static content – since static content doesn't change every other minute or day (most files are kept as-is for years!), you should tell your server that this content rarely changes. Check the nginx cache config and the Apache cache config.
  3. Setting cache expiry headers – this one is definitely under your control, whether you are on shared hosting or running your own server. Send a cache expiry header with all static content. The expiry header basically tells the browser to keep the file for the next n days; it is not strict though, and browsers may still send a conditional request to check whether the file has changed (see the nginx sketch after this list).
  4. Offline Storage/WebSQL/Offline Application – yes, you read that right: you can use offline storage to cache insensitive data in the user's browser. This reduces load on the server and the data transferred; you can even cache JS and CSS.
  5. CDN – content delivery networks can also help a lot with caching. Libraries like jQuery, Bootstrap etc. are so common today that if you load them from a CDN your page might not need to download the JS and CSS at all: the file may already be in the browser cache because some other website requested it. You should thank the other guy; someday the other guy will thank you.
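As a concrete example for point 3, a minimal nginx sketch (the file extensions and the 30-day lifetime are illustrative):

```nginx
# Serve static assets with a long expiry so browsers keep them.
location ~* \.(css|js|png|jpg|svg|woff2?)$ {
    expires 30d;                        # sets Expires and Cache-Control: max-age
    add_header Cache-Control "public";
    access_log off;                     # optional: don't log asset hits
}
```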

Website Optimization – Minifying Output using PHP

Minification is a technique in which we remove all unnecessary whitespace: tabs, spaces, newlines, carriage returns and so on. This is done purely to reduce data transfer.
Since not everyone is serving a million hits a second, minifying HTML doesn't help much; enabling GZIP compression is a better first step.
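In nginx, for instance, that is just a couple of directives (the compression level and extra types are illustrative):

```nginx
gzip on;              # text/html is compressed by default once gzip is on
gzip_comp_level 5;
gzip_types text/css application/javascript application/json image/svg+xml;
```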

Apart from this, concatenating and minifying CSS and JS helps reduce the number of requests, which reduces the number of HTTP connections created to serve a client and hence the load on the server.

These advantages are only a small part of the web optimization process, but minification is still adopted widely. Why? Show off! It makes code unreadable, and it looks cool!
Server-level optimizations (GZIP, cache proxies for static content, Apache vs nginx, CDNs, server location, number of DNS lookups, serving content based on the user's device, etc.) work better than just compressing code and uploading.
As an example, there is 400+ ms of latency from India to NY, but only 100+ ms from India to Singapore; with 10 sequential requests per page, using the Singapore server saves about 3 seconds!

I think I am diverging from the main 0bj3ct!v3 of this article. Coming back to minification: everyone wants their code to look cool. Recently I've been working on a college fest site and had the idea to minify its code. A Google search later, I came up with the following code!

Just including the code below makes the HTML a one-liner, and cool.
It is just an application of the output buffer; you can do really cool things by handling the output buffer yourself.
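A minimal sketch of the idea (my reconstruction, not the exact original; the regexes are crude and will also mangle <pre> blocks and inline scripts):

```php
<?php
// minify.php - output minification via ob_start().
// Include it at the very top of a page; everything echoed afterwards
// passes through the callback before being sent to the browser.

function minify_html($html)
{
    // Collapse runs of whitespace (tabs, newlines, spaces) into one space,
    // then drop the whitespace left between tags.
    $html = preg_replace('/\s+/', ' ', $html);
    $html = preg_replace('/>\s+</', '><', $html);
    return trim($html);
}

ob_start('minify_html');
```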

You can read more about ob_start here; it's really interesting.

Note: the original version of this code also had methods to minify CSS and JS, purely for reference. I suggest using Grunt with clean-css and UglifyJS instead. Also, you should not use this technique on sites with heavy traffic: it will increase the load on the server and hurt response time.

Optimal way of scheduling long jobs – Node.js/JS

If you are using Node.js for backend work and need to schedule jobs, you will probably hate scheduling because of two factors:

  1. You can't set a long timeout (the delay is a 32-bit signed integer, so anything beyond about 24.8 days overflows)
  2. Timers are lost when the process restarts

You would probably favour cron jobs for scheduling, or maybe polling the database at a regular interval. But loading a Node.js script every 5 minutes is costly (developers understand), so you can't follow the PHP-style short-interval approach.

The first problem can be solved by calling a function recursively with shorter timeouts; there are many solutions on the web.
But there lies a problem: you can't cancel the timeout after a recursive call, because each call replaces the timer object. To solve this I wrote a few lines of code.
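Something along these lines (a sketch of the idea; longTimeout, the handle object and the onUpdate hook are illustrative names):

```js
// longTimeout.js - setTimeout without the ~24.8 day ceiling.
// Chains shorter timeouts; each hop replaces the internal timer, so the
// caller can be notified via an optional onUpdate callback and can still
// cancel through the returned handle.
const MAX_DELAY = 2147483647; // 2^31 - 1 ms, the real setTimeout limit

function longTimeout(fn, delay, onUpdate) {
  const handle = { timer: null, cancelled: false };

  function schedule(remaining) {
    const step = Math.min(remaining, MAX_DELAY);
    handle.timer = setTimeout(() => {
      if (handle.cancelled) return;
      if (remaining - step > 0) {
        schedule(remaining - step);           // hop again with a fresh timer
        if (onUpdate) onUpdate(handle.timer); // report the new timer object
      } else {
        fn();
      }
    }, step);
  }

  schedule(delay);
  handle.cancel = () => {
    handle.cancelled = true;
    clearTimeout(handle.timer);
  };
  return handle;
}

// Usage: fire in 40 days, cancellable at any time via job.cancel().
const job = longTimeout(() => console.log('fired'), 40 * 24 * 3600 * 1000);
```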

The above code behaves the same as setTimeout if you don't need timer-object updates; otherwise you can register for them via the update hook.

The second problem: timers are lost when the process restarts. That shouldn't be hard to fix; you can persist the jobs and create a function that executes atomic tasks.
Note the word atomic: if your setTimeout callback depends on in-memory variable state, you will need to load that state from the database, which makes the job harder. The better way is to schedule something self-contained like "send email to [email protected]" rather than "send message to currently logged-in users".

The solution to the second problem really depends on your use case, but if you analyze your scheduled jobs closely you will find that they are inherently atomic; you made them non-atomic!

If you really need jobs that are never lost from memory, a better way is to run a stable, dedicated Node server and use IPC, but that would be practically hard to maintain.

Backup Server Configurations to Git

When you have multiple servers it is a pain to remember every configuration, and it may take hours to configure the servers again in case you need to.
It is also impractical to copy files around by hand.

There are a few tools available, but they come with overhead attached; moreover, it's fun to write custom solutions 🙂

1. Create a directory.
2. cd into it.
3. Initialize git.
4. Add a remote.
5. Create a branch specifically for that server.
6. Check out the branch.
7. Add the shell script.
8. Make a commit and push to the server.
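In commands, the steps look roughly like this (the repo path, remote URL and the branch name web-01 are placeholders):

```sh
mkdir /opt/config-backup && cd /opt/config-backup     # 1-2: create dir, cd in
git init                                              # 3: initialize git
git remote add origin git@git.example.com:ops/config-backup.git  # 4: add remote
git checkout -b web-01                                # 5-6: create and check out branch
cp /path/to/backup.sh .                               # 7: add the shell script
git add backup.sh && git commit -m "initial backup script"
git push -u origin web-01                             # 8: commit and push
```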

Content of backup.sh:
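(A sketch of what it can contain; the file list, repo path and branch name are illustrative.)

```sh
#!/bin/sh
# backup.sh - copy interesting config files into the repo, commit, push.
cd /opt/config-backup || exit 1

# Snapshot the configs we care about (add your own paths here).
rm -rf ./etc-nginx && cp -r /etc/nginx ./etc-nginx
cp /etc/fstab   ./etc-fstab
cp /etc/crontab ./etc-crontab

git add -A
# Commit only when something actually changed.
git diff --cached --quiet || git commit -m "config backup: $(date +%F)"
git push origin web-01
```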

Then don't forget to generate an SSH key and add it to your git server.

Now I can either set up a cron job, add a command to my daily backup script, or trigger the configuration backup from a remote machine.
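Either way it is a one-liner (hostnames and paths are placeholders):

```sh
# Daily at 03:00 via cron on the server itself:
#   0 3 * * * /opt/config-backup/backup.sh
# Or triggered from a remote machine over SSH:
ssh root@your-server '/opt/config-backup/backup.sh'
```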