Emojis are everywhere. From Twitter to Facebook Chat, they've grown so popular that one of them became Oxford's 2015 Word of the Year, and they were even featured in a horrendous movie. But what about outside SMS and instant messaging? What about using emojis inside code comments, or even git commit messages? Let's find out how we can make the best use of these funny little pictures.
Contrary to what people may think, emojis have been around for quite some time. The first emoji dates back to 1999, created by Shigetaka Kurita, a Japanese telecommunications planner at NTT Docomo. At first used solely in Japan, it took ten years for some of these little pictures to be added to the Unicode character space: engineers at Google and Apple spent years convincing the Unicode Technical Committee to include them. Then, in October 2010, Unicode Standard 6.0 was released, and with it 722 emojis. They do not live in their own dedicated blocks though, and are instead spread around the Unicode tables. Now emojis are a part of everybody's life.
There are even some quirks and fun little facts about these tiny pictures. For example, emojis can vary from one platform to another: the calendar emoji on Apple products always shows July 17, the date iCal was announced back in 2002. This led people to "wrongfully" declare July 17 World Emoji Day.
Emojis are also drawn differently across platforms, and can thus be interpreted slightly differently. Take for instance the astonished face emoji 😲: Apple's take on this feeling looks a bit more tame than Samsung's, don't you think?
Other times, it can be the contrary. Samsung's interpretation of the pouting face 😡, for example, feels less "angry" than Twitter's.
But enough with the history, let’s get down to coding.
GitHub popularized emoji support inside its ecosystem in a blog post from 2012, thanks to its now famous :shortcode: syntax. So now, say you want to use the fox face emoji 🦊 somewhere on GitHub (a commit message, an issue or a gist): you can simply type :fox_face: instead, and it will automatically be interpreted by GitHub.
Using shortcuts is an elegant way to circumvent emojis not being interpreted. You don't take the risk of breaking something, and even if they're badly rendered (or not rendered at all), the messages remain readable.
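Under the hood, a shortcode renderer is just a lookup and replace. Here's a minimal sketch in JavaScript; the mapping table is a tiny illustrative subset, not GitHub's actual implementation:

```javascript
// Tiny illustrative subset of an emoji shortcode table.
const SHORTCODES = {
  fox_face: "🦊",
  bug: "🐛",
  sparkles: "✨",
};

// Replace every :name: found in the table; unknown shortcodes are
// left alone, so the text stays readable either way.
function renderShortcodes(text) {
  return text.replace(/:([a-z0-9_+-]+):/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(SHORTCODES, name) ? SHORTCODES[name] : match
  );
}

console.log(renderShortcodes(":sparkles: Add the ability to edit a user"));
// → "✨ Add the ability to edit a user"
```

The real renderers are smarter (they skip code spans, for one), but the principle is the same.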
Emojis can also add a lot of clarity to commit messages. Compare these two sequences:
- Fix editing user not being saved to the database
- Cleanup code
- Add the ability to edit a user
- Fix bad function callback on API request
- 🐛 Fix editing user not being saved to the database
- 📝 Cleanup code
- ✨ Add the ability to edit a user
- 🐛 Fix bad function callback on API request
You can immediately see where bugs were fixed and where new features were added.
On a platform that doesn’t support emojis, this would be read as:
- :bug: Fix editing user not being saved to the database
- :memo: Cleanup code
- :sparkles: Add the ability to edit a user
- :bug: Fix bad function callback on API request
Definitely not as fun, but still perfectly readable.
The tech industry as a whole appropriated these shortcuts and went far beyond simple emojis. Sure, it's nice to use 🐛 when talking about fixing a bug, but try using :trollface: in Slack or Redmine. Boom, you're the new cool kid on the block. Don't use it too often though, you don't want to be that guy.
My advice: don't hesitate to use emojis in git commits, but do prefer short-codes. I would also suggest not going overboard with it: stick to a short list denoting the major actions (bugfix, new feature, styling, code cleanup, etc.).
If you're not sure where to start, or want to suggest a guideline for your team, I highly recommend Carlos Cuesta's Gitmoji. It even comes with a nifty CLI (simply called gitmoji-cli) that helps you write your commit messages through an interactive interface. Gitmoji is even used in Atom's contribution guidelines.
So what about emojis in actual code? JavaScript, for one, counts string length in UTF-16 code units, which leads to surprises:

```javascript
"🐼".length  // returns 2
"🇨🇦".length  // returns 4
```
Don't forget emojis can also be combined (kind of the same way Fira Code gets you those sexy sexy ligatures). That's how you get skin color modifiers (called EMOJI MODIFIER FITZPATRICK TYPE-1-2 through TYPE-6), and composite emojis like the family one:
```javascript
"👨‍👩‍👧".length // returns 8
```
Why 8? Because not only is each of the three emojis that symbol is made of two characters long (a surrogate pair), the sequence also uses two ZWJ (Zero Width Joiner) characters as "glue": 2 + 1 + 2 + 1 + 2. You can even see it in action: copy/paste that emoji inside VS Code, for instance, and it'll take you five "arrow key" strokes to go through it, one per code point.
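If you ever need to count what a human would call "characters", iterating by code point (which JavaScript's string iterator does) gets you closer than .length, though it still doesn't merge ZWJ sequences into a single glyph:

```javascript
const panda = "🐼";
console.log(panda.length);       // 2: .length counts UTF-16 code units
console.log([...panda].length);  // 1: the spread operator iterates by code point

// The family emoji built explicitly: man, ZWJ, woman, ZWJ, girl.
const family = "\u{1F468}\u200D\u{1F469}\u200D\u{1F467}";
console.log(family.length);       // 8: three surrogate pairs plus two ZWJs
console.log([...family].length);  // 5: five code points, still not one glyph
```

For true grapheme counting you'd need a Unicode segmentation library, which is exactly why emojis have no place in code logic.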
My advice: do not use emojis in code logic. Plain and simple. But you can still use them in your views. Web browsers have amazing emoji display capabilities, and know how to fall back to a font that will display your "thumbs up" icon. But watch out and be careful when using an emoji short-code interpreter in those views, especially if you happen to display code blocks on your website. It could trick you, interpreting the :m: in h:m:s and turning it into hⓂ️s, thus making the code block useless.
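To see how that bites, here's a deliberately naive interpreter (a hypothetical sketch, not any real library) running over a snippet that happens to contain :m: inside code:

```javascript
// Naive interpreter: it replaces shortcodes everywhere, without
// skipping code blocks.
const naiveRender = (text) => text.replace(/:m:/g, "Ⓜ️");

const snippet = 'const duration = h + ":m:" + s;';
console.log(naiveRender(snippet));
// → const duration = h + "Ⓜ️" + s; (the code is now broken)
```

A proper renderer has to tokenize the page first and leave code spans untouched.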
So what about code comments? Emojis everywhere! As far as I know, emojis in comments are not susceptible to break anything. Modern code editors (Atom, VS Code, Sublime, IntelliJ…) have amazing emoji support. They can even be pretty useful to make something stand out:
```javascript
/**
 * WARNING: Do NOT change this file.
 */
```
```javascript
/*
 * 🛑 WARNING: Do NOT change this file.
 */
```
Emojis are a double-edged sword. They allow us to express complicated feelings in a quick and fun way. They are the extension of the emoticons we used profusely back in the IRC glory days. They can be used as decorators, adding feeling to an otherwise plain sentence. They can also be used as markers to make something stand out, and even as a complete communication tool when used on their own.
However, since they're not designed and interpreted uniformly across platforms, they can be the source of misunderstandings. Communication relies on the stability of its means of propagation: if a symbol changes between the sender and the receiver, the message is not the same. As characters, they also need to be put in context. That's why some of them had to be changed; for instance the :gun: emoji 🔫, which used to be represented as a real gun, is now a water pistol.
When it comes to code though, I'm all in favor of using emojis. Not in the code itself, as I've stated, but rather in comments and commit messages. They embellish the message they're attached to, since they're mainly used as markers. And with the help of short-codes, you can use them without the fear of breaking something.
If you want to know more about emojis, you should check out Monica Dinculescu's work, and especially her talks.
I also recommend Angela Guzman’s post on the making of Apple’s emoji. Angela writes how she and her mentor Raymond designed over 500 emojis during her internship back in 2008. This changed her life, and her work is now in the hands of millions of people.
So go ahead and emoji away, you’ll improve readability and break away from the monotony of a dull screen filled with code. 😄
Cover image: Nong Vang
The protection of digital assets is a multi-million dollar industry. Whether we’re talking about military, financial or scientific data, each industry has to be prepared in the event of a loss, and plan for security. They often roll out extreme measures, going as far as having their own (and doubled) dedicated electrical power lines. But what about safeguarding your friend’s latest BBQ party pictures? Or your little one’s first steps video? Here’s how I’ve learned my lesson from a tragic system failure and what my current setup looks like now.
Back in 2008, I made myself a custom NAS (Network Attached Storage) using some old computer parts, a bunch of 500 GB hard drives and a copy of FreeNAS. The OS ran off a nifty 512 MB IDE flash based drive, and the data array was configured to use RAID5. That meant that if a drive was to be damaged, I could always put a new drive in the array and the data would rebuild itself. Note the conditional tense. That’s because it worked flawlessly until we moved in 2010. And we stored the NAS in a box next to a speaker with a huge magnet for a month. And two drives failed.
I spent weeks trying to figure out how to rebuild the data I had lost, but after some time I had to face reality: it was all in vain. Years of family photos and videos, an entire MP3 collection, all my video games… It was all lost, forever. My girlfriend was in tears, and my "geek pride", after spending all that time planning and building this whole system, took a serious hit. I was using hot data as storage, and at the time I had no backup strategy. Of course I had recovery options, hence the RAID5, but I was not prepared for such a catastrophic failure. And when it comes to computer security, you have to prepare for the worst.
Years later, I learned my lesson. So here is how I handle my digital life now.
My current setup is mainly built around two things: a new NAS, that I bought and not built, and a backup software that automates how data is handled.
The NAS I'm using now is a Synology DiskStation DS214se. It's a very simple machine, running on an 800 MHz dual-core CPU, with 256 MB of RAM and two hard drive bays. I put two 2 TB Western Digital Green hard drives in it, and configured a single array to run in RAID1: everything on one drive gets mirrored to the second one. That means I lose half of the theoretical storage space, but if one drive fails, I can swap it out and the data will automatically rebuild itself.
The NAS sits on top of an APC Uninterruptible Power Supply. If power is lost in my apartment, the NAS keeps running and I can manually (and safely) shut it down, either by using its physical power button (which sends a power off command) or even my phone (my router is also plugged into the UPS, so even without power I still have internet and network access for a few minutes).
My main backup strategy is handled by an amazing piece of software called SyncBack Free. It allows me to set up various backup scenarios, called profiles. The main profile is a physical backup to an external hard drive: when I bought the Synology NAS, I got a third 2 TB drive that is now used as a backup. This is my first failsafe, and precisely what my previous setup lacked. Once the backup task is done, that drive is stored offline and off-site, so it doesn't suffer from electrical malfunctions. And even in the event of a fire or flood at my place, my data is safe.
SyncBack then runs two more jobs. Amongst all the data I lost with that old setup, the family photos were the hardest to cope with. One can always replace the music or movies they used to love, as there's a never-ending stream of entertainment to consume. But memories do fade away, and are impossible to retrieve. So I've decided to add another redundancy layer to my backup strategy when it comes to photos, and store them online in my Google Drive. SyncBack compares the content of the NAS folder and my Google Drive, updates the latter with the former, then performs a Cyclic Redundancy Check on each file to verify it is the same on both sides.
I should note that I could use two different apps on the NAS and have it handle these two backups automatically: USB Copy and Hyper Backup. After trying out both apps in different scenarios, I've decided not to use them, as they either store data in a proprietary format (Hyper Backup) or add a bunch of ._-prefixed metadata files to my existing directories (USB Copy). I like the fact that if I ever need to retrieve my files outside of Synology's ecosystem, I can still use a good old cp command to get them back.
So my data is stored on a RAID1 array and on an offline hard drive, and the photos are backed up online on my Google Drive. I could have stopped there, but I thought it was not enough. Thanks to my Amazon Prime subscription, I can upload an unlimited number of photos to their Amazon Drive cloud service without affecting my otherwise limited quota. So hey, let's take this opportunity! Another SyncBack profile backs up the content of my Photos directory to Amazon's servers. I like the fact that my data is stored with two different storage providers: Google and Amazon each have their own infrastructure, so in the event of a failure of astronomical proportions at either one of these places, I may still be safe.
But why stop there? My photos are now stored in four different locations (the NAS, the external hard drive, Google Drive and Amazon Drive). But what about the rest? My music, my documents, my family videos? Of course they're on the NAS and the external hard drive, but I figured I needed another failsafe, because so far my backup strategy relies on what could constitute a single point of failure: SyncBack. If the software behaves badly, or one of my backup profiles is not properly configured, I may end up with nothing but a bad copy in various locations. I also don't have easy access to the external hard drive, so if I need to do a backup at any given time, I have to prepare the operation at least a day in advance.
That's why I subscribed to Synology C2. It's a fully integrated service that runs natively on DSM (DiskStation Manager, Synology's own operating system) and allows me to back up the whole NAS (minus my movies and TV shows, which are not important) to Synology's servers. It uses AES-256 to locally encrypt the data before sending it over the network. I've set it up to run an automated backup on the first day of every week, then an integrity check two days later.
I also considered Online’s C14, as they’re really cheap and you can send files over (S)FTP but unfortunately they do not support Synology.
So this is what my current setup looks like now:
Each file is physically stored in up to six different locations, with various levels of failsafe measures.
Is this setup perfect? Of course not. First and foremost, it lacks automation. I still have to start each backup task (except for the C2 one) manually, which is prone to error. I'm working with live data, so the array is constantly changing; but this is a backup, not long-term cold storage. And the external hard drive I'm using has to be transported and manipulated, which is another weak point in the system.
One thing I'll probably change soon is the model of the hard drives I'm using. WD Greens are "fine", but they are not made for NAS use. So I think I'll switch them for either the WD Red or Seagate IronWolf lines, and probably take the opportunity to do a slight storage upgrade to 3 or 4 TB.
All in all, the main problem with backup strategies is that they’re never perfect. Just look at what happened at GitLab a few months ago, or even the catastrophic failure that brought OVH to its knees for hours.
One cannot be fully prepared against data loss. Still, I can say I feel somewhat confident with this strategy, and I've tried to think about every scenario (even solar flares, but those are a whole other animal). We'll see how and where my data sits in a few years.
Cover image: Patrick Gradys
I've been a fervent advocate of password managers for years. You can ask pretty much anyone in my family or amongst my friends: there was a time when I had to ask the question "By the way, what do you use to store your passwords?". This was usually followed by a 20-minute speech about how insecure their digital life was, and a desperate attempt at convincing them that they MUST use a password manager. That is, when I didn't faint upon learning that my friend or relative was using the same 7-letter password for absolutely EVERYTHING, "oh and it's hunter2, I don't mind telling you, I have nothing to hide". Yes, that happened.
For the past four years or so, my password manager of choice has been LastPass. I’ve been a happy Premium member ever since, and even though it’s had a few hiccups (in 2015 and in 2017), they’ve been pretty transparent about the situation each time and I chose to keep trusting them. I used to log into my account using two-factor authentication: my Master Password and a Yubikey, and it’s been working flawlessly for years. However, after a long consideration, I’ve recently chosen to say goodbye to LastPass and continue my journey with another solution. Here’s why.
You see, the problem with computer security is that you always have to find the proper balance between how safe you need to be and the convenience of day-to-day usage. My setup was somewhat secure, but the convenience wasn't there anymore. I usually have to log into my LastPass account several times a day, as I'm using several web browsers, sometimes with several profiles per browser. Therefore, since LastPass automatically logs out whenever a new session is opened somewhere else, I also had to use my Yubikey multiple times per day, even per hour. In the end, it was starting to become a burden.

Besides, LastPass has made some changes to its UI, and is focusing more and more on being as easy to use, and in a way as "opaque", as possible. While I'm all for it, as it draws more and more people to being safer on the web, it can lead to really annoying behavior under the hood. Often, the browser extension does not catch the proper fields for the username/password combo. Or when I'd use the password generator, it didn't necessarily register the result properly. It has difficulties dealing with websites that make use of AJAX requests. It often saves uselessly complicated URLs generated during the signup process, making a lot of its database entries dirty.

If you're not tech-savvy, it's still an amazing product, and it's way better than using nothing, or that old post-it note on your computer screen. But for me, what was once a really good tool had become a burden. It was time for something new, or in this case, something old.
KeePass Password Safe has been around for 13 years. The interface shows its age, but its ease of use and the security features it offers have been proven multiple times. Besides the usual username/password combo, you can specify custom fields, and even attach files to your password database. Speaking of which, KeePass uses AES, TwoFish or ChaCha20 as the database cipher, and the passwords it contains are protected in memory. It does pretty much everything LastPass does (or the other way around, depending on your point of view), albeit with a less fancy UI and a less "automated" process.
The particular flavor of KeePass I'm using is called KeePassXC. It's a community-driven fork of the now defunct KeePassX, which aimed at being a multi-platform version of KeePass, so it works perfectly on Windows, macOS and Linux. Exporting the database from LastPass and importing it into KeePassXC demands a bit of work, as you have to go through a cumbersome CSV file (which you must not forget to destroy afterwards!) that LastPass' export tool provides you with. LastPass' "Secure Notes" are stored in an XML-like format, and you'll probably have to rewrite them manually inside KeePassXC after the import. But once that tedious task is done, using KeePassXC is really easy, and it works flawlessly. Whenever you need to retrieve a password, simply switch to the app, hit Cmd+F, type in the first letters of the entry you're looking for, and a simple Cmd+C / Cmd+V does the trick. It literally takes less than five seconds. If you're using a browser extension, it can even automatically fetch your credentials, and is therefore as fast as LastPass, if not faster.
The setup I'm using now is as follows. The password database file is stored on my Google Drive. This gives me silent synchronization between my different devices, and provides me with an online backup. I'm using both my Master Password (with 100+ bits of entropy) and a key file (which is stored locally, not on Google Drive) to unlock the database. It communicates automatically with my various browsers thanks to two extensions: chromeIPass and PassIFox. The database file and the key file are backed up on a Network Attached Storage drive, plus two offline copies.
So I'm finally happy again. The main "problem" I have with KeePassXC is its UI. In these days of Flat UI and Material Design guidelines, the software feels really dated. It also lacks some basic features, such as choosing which fields are displayed in the entry columns, and my biggest gripe: custom fields do not show up straight away when you search for a specific entry. Let's say, for instance, that you create an entry for a credit card. It contains several custom fields, such as the card number, its expiration date and the CVV. KeePassXC won't show those fields unless you go and edit the entry to display the custom fields, or right-click and select one of the "copy attributes" options. It would have been much better if each entry could be displayed in its entirety in a single pane, minus the protected fields of course.
I should give an honorable mention to KeeWeb, a web app (also available as an Electron desktop app) that tries to rejuvenate KeePass in that regard. But it poses even more security problems than KeePass does. I won't go into too many details in this post, but the flaws are easy to guess.
All in all, I feel much better using KeePassXC than I did using LastPass. Of course, this solution is not the most secure there is, especially with the database file stored in the cloud. I could fix this by using a self-hosted service like ownCloud or BitTorrent Sync, but once again, the balance between security and day-to-day usage would be lost. I'd love for KeePassXC to support TOTP so that I could use my Yubikey again and get rid of the key file, but so far I feel confident in this solution.
By the way, what do you use to store your passwords?
Cover image: Victoriano Izquierdo