offset \ˈȯf-ˌset\ noun

a force or influence that makes an opposing force ineffective or less effective

Mellow - Trello to Coggle Converter

I wanted to figure out how to set up serverless things, so I opted for a simple application, although it turned out the architecture had quite a few moving parts. The application in question takes a Trello board JSON export and converts it to an MM file for Coggle.

Let's say you are primarily using Trello to organize your to-do list. It's a great tool with a lot of features, so why not use it? But maybe you are more of a visual type, and seeing the board in a color-coded diagram could help you prioritize and group the tasks more efficiently. If that's the case, this small app competently meets that very specific need.

It's as simple as going to the Trello board's settings and exporting it as JSON.


/media/images/export.png

You can either save a file to disk from your browser or copy the JSON content to the clipboard. With that in hand, you can go to the service I made, paste either of those in, and press the "Convert" button.


/media/images/convert.png

It will give you an MM file for download. Save it, and when you later open up Coggle, create a new diagram and drag and drop the file there.


/media/images/diagram.png

The result is your Trello board in a new mind-map view.


/media/images/diagram2.png

Simple as that, without the hassle of granting permissions or handing any info to the app. I called the Trello-board-to-MM-for-Coggle converter Mellow.

The reason I actually went for it is that we've suddenly found ourselves in the process of looking to buy an apartment, and moving things around on the screen and seeing them presented in different ways helps a lot with spotting previously unseen patterns and connections. Both Trello and Coggle came in handy, and as I am a tinkering IT guy, Mellow came to be as well.

That was really the main reason. The other was taking my mind off the fruitless pursuit of finding a space we would actually pay to own, rather than tolerate to rent.

But back to Mellow. I made a primitive SDK and CLI for converting the Trello export to Coggle so it was just a matter of wrapping it in a web service.

Now on to the boring part about the architecture. The front end is a simple static app featuring plain old HTML and some new JavaScript for modern browsers. There is jQuery in there, though, since jQuery comes as a requirement for the Bootstrap components the front end is made with. It made sense to keep it minimal, but I had to include the full build of jQuery instead of the slim one to support the CORS calls to a different service. I put the Trello to Coggle front-end repository on GitHub in the end, since that allows for deployment previews in Netlify, as opposed to Bitbucket, which I usually work with. Vesna helped a lot with the parts I only had a rough idea how to make. Netlify is an option that came recommended by a friend, since you can properly specify the security HTTP headers with it.

Having the front end set up, I had to have something for it to communicate with. I made a small Flask web application that takes a text string as an argument via form submit and gives back JSON in response. On success, the response contains a base64 string that is used to construct an MM file in the browser; I didn't want to deal with file management in the transfer. The application imports the SDK and CLI script as a dependency, and it worked properly with the front end communicating the data it needed. A small set of tests was added to make sure it responds correctly.
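The request/response contract is easier to see in code. Here is a minimal sketch of it with the actual SDK conversion mocked out — fake_convert and the field names are made up for illustration; only the base64-in-JSON idea comes from the service itself:

```python
import base64
import json


def fake_convert(trello_json: str) -> str:
    """Stand-in for the real SDK call; the actual converter builds
    a FreeMind (MM) XML tree from the Trello board structure."""
    board = json.loads(trello_json)
    return '<map version="1.0.1"><node TEXT="%s"/></map>' % board["name"]


def handle_submission(trello_json: str) -> dict:
    """Mimics the shape of the JSON the endpoint sends back: on
    success, the payload carries the MM file as a base64 string, so
    the browser can construct the download and no file management is
    needed on the server side."""
    try:
        mm_xml = fake_convert(trello_json)
    except (ValueError, KeyError):
        return {"success": False, "error": "Invalid Trello export"}
    encoded = base64.b64encode(mm_xml.encode("utf-8")).decode("ascii")
    return {"success": True, "mm_base64": encoded}


response = handle_submission('{"name": "Apartment hunt"}')
print(response["success"])  # True
```

The front end then only has to base64-decode that field and offer it as a file download.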

Since it's Python all the way, I opted for Zappa to create a serverless AWS Lambda function out of the Flask application. The resulting URL of the function is set as the form action attribute in the front end of the service. Of course, I've set up a custom subdomain for it under the domain I already own. The FaaS repository for the Trello to Coggle converter is hosted on Bitbucket, and I'm deploying it with their Pipelines, which makes sure the tests pass before deployment.
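For context, Zappa is driven by a zappa_settings.json file; a minimal sketch might look like the following (the stage, bucket and module names here are illustrative, not the real ones):

```json
{
    "production": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "s3_bucket": "zappa-mellow-deploy",
        "keep_warm": false
    }
}
```

`app_function` points at the Flask object inside the module, and `keep_warm: false` is mentioned again below in the section on costs.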

Deploying to Netlify is just a push away with the settings in the netlify.toml file. There is no specific build step, since I didn't want to overengineer it. Deploying to AWS is described in the bitbucket-pipelines.yml file and is pretty straightforward.
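Netlify also reads custom response headers from netlify.toml, which is how the security headers mentioned earlier get set. A sketch of that section — the CSP values here are illustrative placeholders, not my actual policy:

```toml
[[headers]]
  for = "/*"
  [headers.values]
    Content-Security-Policy = "default-src 'self'; form-action 'self' https://api.example.com"
    X-Frame-Options = "DENY"
    X-Content-Type-Options = "nosniff"
```

The `form-action` directive matters here because the form submits to the Lambda function on a different subdomain.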

To recap what I had problems with: it was definitely debugging the security headers. I thought I had cleared everything up so nothing ran inline, but apparently browser extensions can throw you off; you need to run your browser in safe mode to see the proper output. The other thing was pointing the custom domains from my DNS (DigitalOcean in my case) to Netlify and AWS. That itself wasn't a problem, because you just follow the instructions for adding a custom domain and set up a CNAME record on your DNS. However, getting a certificate for HTTPS in AWS only worked in the us-east-1 (North Virginia) region. I couldn't register a certificate in my preferred region, so I had to resort to that one for the moment.

Regarding costs, the service falls into the free tier on every part, since it's not going to see heavy use. I also disabled keep_warm in Zappa, because the service doesn't need to be up all the time. I didn't want to add an auth mechanism, because that would require me to dabble with user info and GDPR compliance. It could be useful to connect the apps and just negotiate the content, but it's not necessary for now; I just wanted to try out the FaaS part. I already own the domain, so it was just a question of adding a subdomain for the service. The service could benefit from additional tests, analytics and monitoring, but that would be overkill at this stage.

I hope some of you will find Mellow, the Trello to Coggle converter, useful. If you do, but find it can be improved, speak up.

Tile and Slice Plug-in for Krita and Leaflet

The Krita plug-in for Leaflet in question is functionally identical to the one I made for GIMP, which I described in an earlier post. It takes an image, scales it up according to the zoom level in question, crops it to a perfect square and then slices it into tiles.

The tiles themselves are used by the Leaflet JavaScript library, or whichever library needs tiles, but Leaflet is meant to be the reference. The plug-in saves the resulting tiles in a folder of your choice, with a structure that you can use with Leaflet.
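Assuming the output follows Leaflet's standard {z}/{x}/{y} tile scheme with 256-pixel tiles (which is what the folder structure suggests), the arithmetic behind the slicing can be sketched like this:

```python
TILE_SIZE = 256  # Leaflet's default tile size in pixels


def tile_paths(zoom: int):
    """For a given zoom level, the image is scaled up to a square of
    TILE_SIZE * 2**zoom pixels and sliced into a 2**zoom x 2**zoom
    grid. Leaflet then requests each tile by its {z}/{x}/{y}.png path."""
    side = 2 ** zoom           # tiles per side at this zoom level
    scaled = TILE_SIZE * side  # pixel size of the scaled square image
    paths = ["%d/%d/%d.png" % (zoom, x, y)
             for x in range(side) for y in range(side)]
    return scaled, paths


scaled, paths = tile_paths(2)
print(scaled, len(paths))  # 1024 16
```

This also makes it obvious why the cost explodes with zoom: each extra level quadruples the tile count and doubles the scaled image's side.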

Krita has had Python scripting support since version 4.0. It also uses Qt to build its interface, which can be scripted with PyQt.

I decided to port what I had in GIMP to Krita and learn something new along the way.

Since the original plug-in was made, there was one requested improvement where I added support for the maximum possible JPEG resolution. It's still a drain on resources; that's the nature of the algorithm, and you had better not go that high if your machine can't support it. Roughly put, a single JPEG at the maximum dimensions can decompress to something like 16 GB, and it will kill your PC when it starts processing. Go easy on that; I somehow doubt the average user has a supercomputer.
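As a back-of-the-envelope check of that figure: JPEG stores image dimensions in 16-bit fields, so the hard cap is 65535 × 65535 pixels; assuming the decoded image is held in memory as RGBA (4 bytes per pixel), that lands right around 16 GiB:

```python
# JPEG dimensions are 16-bit, so the largest possible image is
# 65535 x 65535 pixels. Decompressed at 4 bytes per pixel (RGBA),
# that is roughly the 16 GB figure mentioned above.
MAX_SIDE = 65535
BYTES_PER_PIXEL = 4  # assumption: an RGBA buffer in memory

size_bytes = MAX_SIDE ** 2 * BYTES_PER_PIXEL
print(round(size_bytes / 2 ** 30))  # 16 (GiB)
```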

You install it by extracting the contents of the folder to ~/.local/share/krita/pykrita on Linux, or to the current user's AppData\Roaming\krita\pykrita on Windows. If you're not sure which folder to save the plug-in in, go to Settings -> Manage Resources in Krita's menu bar and press the Open Resource Folder button in the following window. Make sure to restart Krita after you put the plug-in in the right place.

The contents look like this:

krita-leaflet
|-- krita_leaflet
|   |-- __init__.py
|   |-- app.py
|   |-- krita_leaflet.py
|   `-- widget.py
|-- krita_leaflet.desktop
|-- LICENSE
`-- README.rst

It's mostly Python code, except for the .desktop file, which defines the plug-in metadata for Krita in an .ini-like format.

The __init__.py module is important: Krita expects it to import the subclassed Extension from somewhere. It's all Python afterwards. Since libkis is originally a C++ library that is merely exposed to Python, the module relies on getters, setters and camelCase. You have to live with that and ignore the PEP 8 naming recommendations. It works, though.

The workflow: go to Tools -> Scripts -> Krita - Leaflet. It will work on the currently open, flattened image. Pick the zoom level and the output folder, and wait for it to finish. Depending on the zoom level you picked, you can go to lunch or have a coffee. The status bar of the resulting window shows the current point it's at, so you know it's working, but the algorithm has high complexity, so be patient.

You can grab the Krita-Leaflet plug-in from the repo. Pull requests are, as always, welcome.

2018 Recap

A year has gone by already and looking back it was quite eventful so I wanted to recap the highlights for the fun of it.

We traveled Europe with what days off we had at our disposal and went to:

  • Croatia because we wanted to see family and friends
  • Iceland during the winter, where we saw Aurora Borealis and geothermal fields with geysers, and almost lost our fingers from taking pictures without gloves
  • Germany to see the valley of Rhine from Frankfurt
  • Spain to see Madrid and the surrounding small cities (Alcalá de Henares, Toledo, El Escorial)
  • Scotland to see Edinburgh
  • England to see Bristol, Bath and Stonehenge (so Vesna could take a picture in front of the house where Jane Austen once lived)
  • Malta, which we covered quite extensively (because it's small), where we saw some nice megalithic structures and the lesser-known, but still standing, Blue Grotto natural sea arch

We also roamed across Ireland with our friends and visited:

  • Malahide and some beaches around Dublin
  • Tayto amusement park north of Dublin for some thrills
  • Sugarloaf mountain, Powerscourt gardens and Blessington greenway south of Dublin
  • Arklow on the way to Wexford and Waterford
  • Aran islands (Inis Mór), as well as caves with stalactites in the Burren
  • Cork and Limerick, which are both beautiful

Concerts:

  • Incubus, which was planned. It only took us 15 years to see them live after going together to see a cover band
  • Why?, which was not planned, but was great nonetheless. They were celebrating the ten-year anniversary of their Alopecia album

We've been the best man and maid of honor to our best man and maid of honor. Woohoo.

I did an Inktober challenge as preparation for drawing characters in a video game (we may or may not be making): inking every day for the whole month of October. I produced a number of drawings, and I'm slowly getting back into it after a long hiatus.

I changed jobs earlier this year and am now contributing to helping people with what I know. My new job is in the health IT sector.

Vesna gained confidence in her career switch to front-end coding and got a job as well.

I dared to climb an artificial wall. I think I could do it recreationally, but I have yet to figure out how, if at all.

I published these articles semi-regularly. It's OK. I do need to push myself to get them out more frequently.

I replaced the cameras in my cell phone. I talked about the Fairphone earlier. It was easy enough to replace them, and I needed to, since I was taking a lot of photos on all the trips.

Because of the photos, I found a way to enrich the GoPro photos with geotags through my cell phone's GPS, and also made and published a photo tagging script that outputs KML. It creates placemarks at specific spatial and temporal coordinates with the photo attached, figures out what's in the photo with the Keras library and uses the results as the placemark's name. The KML file can then be opened in Google Earth so I can keep track of where my photos were taken.

We visited Octocon which was enjoyable if a bit predictable. We also got Worldcon tickets. It's coming next year to Ireland, which is convenient.

We visited a Marxism conference to see how the community thinks.

We visited Porterhouse brewery and saw the beer making process.

We hosted some guests and went to the usual sightseeing spots in Dublin with them.

We celebrated 15 years of our relationship and four years of marriage... and four years of being in Ireland.

We used our anniversaries as an excuse to buy new toys: a PlayStation 4 to play the exclusives, and a projector so we don't need to go to the cinema anymore. To be honest, we had been talking about doing that for five years.

We did go to the cinema with our usual frequency, but this year we didn't really see anything that stuck with us. I guess we hit a rut.

We're still playing pen and paper games with two parties over Skype. It's been a couple of years now.

We're still donating blood regularly and we didn't have any serious illnesses throughout the year.

We kept the apartment in order, replaced the window locking mechanism, bought a new table.

I made a GreaseMonkey script for Reddit so I could color code posts from the favorite subreddits.

Regarding other projects I have active, I kept the code in order and did minor upgrades like adding SSL on the OffSetLab because browsers are forcing me to, even though the site is static.

Bought Lazy Nezumi Pro. It's a great piece of software.

Reported a bug on Krita.

Wrote this recap. Fingers crossed that the next year is going to be great, too.

Geotagging Photos without GPS Enabled Camera

What happens when you have a camera with no GPS, but you still want to attach spatial coordinates to the images it produces? This post builds on the earlier series of posts about displaying photo coordinates on Google Earth; you should also be aware of the dangers of EXIF tags beforehand.

Some years ago I bought a GoPro Hero 4, which doesn't have a GPS module. Newer models have one, but the Hero 4 and a lot of other cameras don't. This leaves me with the following use case: a number of photos that are not spatially tagged. For the solution to work, the timestamp function on the camera needs to be set up properly. I frequently forget to sync the time on my camera when switching timezones, so this is a reminder that it needs to be checked. The prerequisite is that, at the very least, your camera supports tagging the images with timestamps. Well, that and having a smartphone handy.

In the case of the Hero 4, I can sync it up with Android and adjust the date/time accordingly.

Assuming you have your camera ready and its timestamps working, you can take photos whenever and however you like. Before you go wild with it, though, you need to somehow keep track of where you are. Nowadays most of us own some sort of smartphone with integrated GPS, and I'm guessing the majority of those run a version of Android. You can always take the approach detailed in using Google Location History to enrich the photos with geotags; however, that cannot work for me, since I turned off location history some years ago, which is where the GPSLogger app comes into play.

Get GPSLogger from the Play Store, or wherever you get your apps, and start it up to periodically log the coordinates to a file. Since you keep your phone near you, it will tag your location well within the reach of your camera. Because it tracks GPS coordinates, it strains your battery, so keep that in mind. I was in Cork, Ireland (51.897222, -8.47) last weekend, so I got up in the morning before the trip and set the logging up with the default settings.


/media/images/cork.jpeg

My battery doesn't have a big capacity (2420 mAh), but I don't use the phone that much. Still, I kept a power bank near me just in case. In the end, the phone managed to log for the whole day without needing a recharge. If you have battery drain issues, you can refer to the GPSLogger FAQ for tips and tricks, like lowering the logging frequency; keep in mind that the locations are approximate anyway.

After a day out in the open taking photos, with your phone happily tagging your location at the same time, you end up back home with a number of images on your camera and a GPX file on your phone. Now all you need is a PC to combine the two.

I use exiftool on Ubuntu or WSL for the purpose of combining them. The easiest way to get it is installing it via APT. The package name is libimage-exiftool-perl:

sudo apt install libimage-exiftool-perl

I make sure to place all the images from the camera and the GPX file in the same folder, and it's really easy from there:

cd image_folder
exiftool -geotag gps.gpx *.jpg

That's really all there is to it. The tool uses the GPX track to tag your images with the locations it provides. The locations are approximate at best, but for the purpose of checking out the images, that's good enough.
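The matching idea itself is simple: each photo's timestamp is looked up against the logged trackpoints, interpolating between the two that bracket it. Here is a simplified sketch of that idea in pure Python, with a made-up toy track standing in for the parsed gps.gpx (exiftool does all of this, and more, internally):

```python
from bisect import bisect_left
from datetime import datetime

# A toy track: (timestamp, lat, lon) tuples, the essence of what a
# GPX <trkpt> element carries. Real GPX parsing is left to exiftool.
track = [
    (datetime(2018, 12, 1, 10, 0, 0), 51.8972, -8.4700),
    (datetime(2018, 12, 1, 10, 5, 0), 51.8990, -8.4650),
    (datetime(2018, 12, 1, 10, 10, 0), 51.9010, -8.4600),
]


def locate(photo_time):
    """Linearly interpolate a photo's position between the two
    trackpoints that bracket its timestamp; clamp to the track's
    ends if the photo falls outside the logged window."""
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    if i == 0:
        return track[0][1:]
    if i == len(track):
        return track[-1][1:]
    (t0, lat0, lon0), (t1, lat1, lon1) = track[i - 1], track[i]
    f = (photo_time - t0).total_seconds() / (t1 - t0).total_seconds()
    return lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0)


lat, lon = locate(datetime(2018, 12, 1, 10, 2, 30))
print(round(lat, 4), round(lon, 4))
```

This is also why the camera clock matters so much: a clock that is off by an hour shifts every lookup onto the wrong part of the track.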

Now, with a set of geotagged images, you can use the script from one of the previous posts to make a KML and check things out on Google Earth.

Happy tagging!

Displaying Image Contents in Google Earth with Machine Learning Keras Library in Python

This is the third post in the series about developing a script that uses your photos to create a workable KML that shows where the photos were taken on Google Earth. The first one talked about the script basics and the second one introduced reverse geocoding.

The generated KML still has a problem: the name of each photo is just its file path.


/media/images/geotag-vanilla.png

I wanted to rectify that, since it looks ugly on Google Earth. To do that, I needed to know what was in each image. Renaming half a thousand photos by hand is tedious; a machine learning algorithm is better suited to the task. The solution is called image classification, and in the case I'm going to describe, it uses a convolutional neural network pre-trained on the ImageNet dataset, to keep things simple. In the background, it does operations on an array of values to figure out what's in an image, which is itself an array of pixels. Brandon Rohrer explains how convolutional neural networks work if you want to learn more.

I attended an information science university, and parts of my curriculum focused on machine learning, so the things that today's industry considers hot are pretty much more of the same, just with more resources.

Now, I cannot have what Google has at its disposal in terms of datasets, and I don't want the script to be overly complex for an ordinary user. The usual process is building a tagged dataset to train your own model, but I wanted a pre-trained model so people didn't need to think about it: no fine-tuning needed, quick and dirty. Fortunately, there's a way to do that today.

Enter Keras. Keras is a machine learning library written in Python. It supports several back-ends, but by default it relies on Google's own TensorFlow library. Installation is as simple as installing the tensorflow and keras packages on your system:

pip install tensorflow keras

You don't have to do this yourself, because it's covered in the script requirements. Chances are it will work out of the box, and that's what I'm aiming for, but if you want to leverage your hardware, you can introduce GPU support if you have an Nvidia card, or Intel's optimizations for TensorFlow on Linux systems. You can look up the instructions for installing those yourself; it usually involves installing the tensorflow-gpu package with pip, or wheels from Intel themselves. The details go beyond the scope of this article, which simply aims to provide quick tagging of photos where speed is not critical and the user usually has modest resources.

Keras already has access to the pre-trained models; the first time you run an evaluation on an image, it downloads the model and puts it in a hidden Keras directory in your home folder. This script uses the ResNet50 application, whose model is around 100 MB and is trained on the ImageNet dataset.

When it classifies an image, the results come back as pairs of a label guessed by the algorithm and a certainty estimate, ordered from best to worst. The script then takes the first two guesses and joins them with a "/" character. This is what ends up as the name of the placemark.
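That naming step can be sketched in a few lines. The classifier's output is mocked here in the (class id, label, score) shape that Keras' decode_predictions produces for one image; the labels and scores below are made up, since the real ones come from running ResNet50 on an actual photo:

```python
# Mocked classifier output for one image, already sorted best-first,
# in the (class_id, label, score) shape Keras' decode_predictions
# returns. The values are invented for illustration.
predictions = [
    ("n09246464", "cliff", 0.61),
    ("n09428293", "seashore", 0.22),
    ("n02894605", "breakwater", 0.09),
]


def placemark_name(preds):
    """Join the top two guesses with a '/' -- the naming rule the
    script applies to each KML placemark."""
    return "/".join(label for _, label, _ in preds[:2])


print(placemark_name(predictions))  # cliff/seashore
```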


/media/images/geotag-ml.png

For the moment I am happy with the resulting script. Reverse geocoding and machine learning have shaped it up nicely.

The image classification results are not going to be perfect, but having the image names prepopulated with terms should save you quite a lot of time. You can then manually correct the ones you don't find accurate.

Implementing the ResNet50 Keras application was very simple in the end and is good enough without fine-tuning.

Then again... the categories could be automatically translated as well. So I included TextBlob and powered up automatic translation. You just run:

python geotag-gallery.py --folder=/absolute/path/to/the/image/folder/ --language=hr

The language parameter is optional; it defaults to English if you don't pass anything in.
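For the curious, the flags above might be wired up with argparse roughly like this — a sketch only, since the actual geotag-gallery.py may define them differently; the behavior mirrored here is just what's documented above (--folder required, --language defaulting to English):

```python
import argparse

# Hypothetical reconstruction of the CLI described above.
parser = argparse.ArgumentParser(description="Geotag a photo gallery into KML")
parser.add_argument("--folder", required=True,
                    help="absolute path to the image folder")
parser.add_argument("--language", default="en",
                    help="target language for the placemark names")

args = parser.parse_args(["--folder", "/photos/cork", "--language", "hr"])
print(args.folder, args.language)  # /photos/cork hr
```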


/media/images/geotag-mt.png

Beware, though: TextBlob is not a robust solution, since it uses a public-facing third-party API, and you might experience HTTP 503 errors depending on Google's whims. You might be better off not using that feature; it's not guaranteed to work and is experimental at best.

So there you have it. Like I said before, you can download it from the repository. Pull requests are always welcome.