Planet CDOT

December 07, 2016

Laily Ajellu

US Accessibility Laws and The Classroom

Advanced Accessibility Series

Welcome to the first post in the Advanced Accessibility Series adapted from a presentation given at the University of Vermont.

Not just software

Accessibility applies to four levels of technology:
  1. Hardware
  2. Software (browser, operating system, etc.)
  3. Webpage (HTML) code
  4. Code for connection to the network
A piece of technology is only considered accessible when all four levels are accessible. The example below is primarily a hardware-level issue.

Cell Phone Accessibility Issues in the Classroom

If students are only given the option to use their cell phones to do course-related work, this causes a classroom accessibility issue because:
  • The small keyboard and screen make the phone difficult for users with motor disabilities to use, especially if the input is timed
  • The bright backlight of a cell phone may prevent some users from seeing the colours on the screen

Legal Considerations

Before undertaking any project (web development or otherwise), it's important to know the accessibility laws that apply to your project so that you don't run into costly issues down the road. There are two legal sections in US law that relate to accessibility:
  1. Section 504
    • About accommodating accessibility for old systems that weren't designed with it
    • E.g. a visually disabled student asks for a digital copy of a handout from class
  2. Section 508
    • About moving the responsibility for accessibility to individual companies rather than the government. Each company (of more than 49 employees) must now implement inclusive design.
    • E.g. all new handouts are also made available digitally to all students
To defend these rights, the National Federation of the Blind (NFB) pursues legal action in the US against companies that don't comply, even Google.


by Laily Ajellu at December 07, 2016 06:27 AM

December 06, 2016

Matt Welke

Social Media Working

Today we fixed the bug with gathering social media information (finding out whether or not a visit to the article was from a share). Long story short, we tried to do things the simple way, hooking into the JavaScript events the AddThis developers had created, but that wouldn't work. So instead we re-implemented it. We made our own JavaScript events (just like we did for our own info gathering) and hooked into those. It was very simple to get this info. If a visit came from a share, there was a "service" appended to the URL in the form of a URL fragment (after the #). This was "facebook", "reddit", etc. So we just looked for it in the URL.

On the user guide side of things, I refined it further. I improved how it looked and made it more navigable with an anchor link table of contents. It includes more information about our filtering and CSV output system now that my team mate has finished implementing that functionality. Another feature he’s working on which will be documented soon is live feedback on long-running queries. As it was, the user would click the run button and then it would spin until the query was done, and either display the display components (graph etc) or display a button to download the generated CSV. Some of these queries take a minute or two. So now the user clicks the play button and gets a series of steps displayed as they’re completed. The steps represent the parts of the query and the packing of the CSV file, etc.
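The step-reporting idea can be sketched as follows (the names and shape are invented for illustration, not my team mate's actual implementation):

```javascript
// Hypothetical sketch: run a long query as a sequence of named steps,
// reporting each step name as it completes so the UI can show progress
// instead of a spinner.
async function runWithProgress(steps, report) {
  for (const [name, stepFn] of steps) {
    await stepFn();  // e.g. one part of the query, or packing the CSV
    report(name);    // the UI marks this step as completed
  }
}
```

The `report` callback would append the completed step to the list the user sees while the query runs.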

by Matt at December 06, 2016 10:57 PM

December 05, 2016

Matt Welke

Testing Social Media Actions

Today I finished up the second version of the User Guide. It actually doesn’t look very different from before, but now the content of the guide is separate from the markup to present it. So if we decide to change how it’s displayed, instead of changing 100 lines of code, we change one. Yay!

The social media actions have been the primary focus. We tested today and we're now able to track when users share an article and follow as a result of viewing the article. However, a huge part of why we were excited to implement this is because the AddThis API allows us to get a "clickback". One of the things logged when someone visits the site after clicking a link to it is whether or not that visit came from a share. We can actually know how successful the sharing is. This sounds incredibly useful, but unfortunately we're having issues getting this working. We're still debugging, and hopefully we can figure out why the event from the AddThis documentation isn't firing. Perhaps we'll talk to their support if we can't debug it. Maybe there's an issue we can be a part of fixing.

by Matt at December 05, 2016 11:23 PM

December 03, 2016

Matt Welke

Adding Social Media Actions

Today we added more code to our setup (both Push API and web browser code) to catch whether or not the users are sharing articles on social media. We were already looking at whether or not they were clicking on links, but we wanted to grab this info if we could because of how valuable it is.

Our usual approach to observing user behavior on the web pages is to look for CSS classes that are unique to the HTML elements the users interact with. We can write JavaScript that sets event handlers on those elements so that messages get sent to our Push API when they interact with them. Luckily for us, the social media buttons do have those classes. But wait, there’s more!

The buttons are from a social media service called AddThis, which provides more functionality than just CSS classes. They've already designed an API with event handling, using the same unobtrusive JavaScript techniques we used for designing our client code. So we don't even have to write our own event listeners targeting elements with CSS classes. We just tap into the events they've already prepared as part of using their API. For example, one of the events is called "" and it fires when the user uses the buttons placed on the page by AddThis to share the page (e.g. to Reddit, Facebook, etc.). That saves us some work and also makes our work more future-proof because we're tapping into the way they designed their system, which likely won't change. It's official, it's supported. It's not just something two co-op students at Seneca came up with as they learned how to build things.

We made the required changes but due to the complexity of the changes and my unfamiliarity with testing this kind of change, we decided to wait until Monday morning to test it, when we’ll have more time to observe it and correct any issues.

The rest of the day I also spent time preparing for my second presentation. I'll be giving an introduction to Elasticsearch and how to use it in an MVC-style web app. My team mate will be giving an introduction to machine learning and how search engines like Elasticsearch rank results, using mathematical techniques like term frequency-inverse document frequency.

Oh yeah and I learned that Google Docs produces much nicer-looking presentations than LibreOffice Impress.


So far so good with our scale-up to all the article sections, by the way. Our database isn't breaking a sweat:


by Matt at December 03, 2016 03:13 AM

December 01, 2016

Matt Welke

Monitoring Database Performance and Adding More Actions

Today we went live on more areas of the site. We’re going to slowly ramp up and make sure our MongoDB database can handle receiving all the data. We’re at a point that we’re receiving about 50-100 hits per hour (where a hit is a visitor visiting one web page), and it’s coping fine. We expect it to be able to cope with much, much more, thanks to the design decisions we made (like using WebSockets instead of AJAX to send the user behavior to our server).

I made a performance monitoring script that runs every 15 minutes on my work computer, running a simple test query that scans every hit in the hit collection. I want to know how many milliseconds it takes to complete that query, and I want to chart that as the number of hits in the collection increases, so that we’ll know the performance of our setup as a function of the number of hits it needs to scan. So far, it’s taking about 2-4 ms per query, and I suspect most of that is the time it takes to go to AWS and back. I suspect that only a tiny fraction of that is the time it takes to execute the MongoDB query. So we’ll have to see how that changes as time goes on. Right now the charts I can make in LibreOffice from the data I have look pretty, but they’re pretty meaningless:


I want to know how well this thing holds up when there are millions of hits in the collection. And I want to make more thorough tests that see how it performs when you need to do complicated queries on it like we need to do, queries that involve storing things in memory on the web server and then going *back* to the MongoDB server for another query before it's done. We know from experience that those take far longer than 4 ms, and I think tools like this script I created will help us understand how the performance of our database decreases as it scales.

I also began working on new information we need to gather from the user behavior during their visit. We need to know whether or not they click on the social media sharing buttons to get a better idea of how interested they are in the article they're reading. Right now we track things like clicking on links, but we haven't gone deeper to the point of understanding the links and buttons they're clicking on. I started to analyze the markup produced by their CMS to see if those buttons had useful CSS classes (easy for us to target), and they did! So it shouldn't be too hard to add that functionality to what we're already collecting.

by Matt at December 01, 2016 10:47 PM

Andrew Smith

Social media dinosaurs

That’s me. To date this blog has been the complete extent of my presence on anything resembling social media. I don’t even send text messages because I think they’re dumb.

But I’m starting a new blog (very soon, hopefully next week) which I want to be noticed. And I figured the only way it will get noticed is if I get onto social media. At least I hope so. I am not planning to dive into the universe of stupidity which is Twitter, Facebook, Reddit, etc. I’ll just use some of those platforms to post links, and allow people who are already on those platforms to use them to discuss the things I will be bringing up.

I don't know if I'll be able to keep up. I don't know whether the message will be diluted or amplified. It will be interesting to see how it goes.

by Andrew Smith at December 01, 2016 07:41 AM

The server is big again

My server is now in its fourth incarnation! It started as a massive old PII, which I eventually replaced with a tiny Koolu for 400$. I was excited about that because the new server used less than 10W of power, had no fans, but did everything I wanted.

Later I upgraded that to an Intel Atom based machine. I don’t remember what it cost but probably about the same. It had more RAM (2GB!) and a faster CPU but it had one fan and used slightly more power.

Recently I upgraded to Slackware 14.2, which came with the newest versions of Apache and MariaDB. Those chewed up my RAM in hours. I upgraded the RAM to 3GB (the maximum this machine is capable of using, despite the advertised 4GB) but that also wasn't enough. The script I wrote to catch out-of-memory problems emailed me almost daily.

So I went shopping, and this time instead of looking for the lowest possible power I just looked for a more reasonable balance. I ended up getting a 65W AMD A10 on a mini-ITX board with 16GB (yeah!) of 2400MHz RAM. I also bought a mini-ITX case, the size of which shocked me. I guess I should have expected it when I saw the box (it's the volume of a full tower) but whatever, I just had to move the UPS out of the shelf where the server lived. Here is the old one and the new one next to each other:


The whole thing cost me 560$ with tax, all new except the power supply. Weird how I went from massive back to massive! The new machine has a CPU fan (obviously) but that really is very quiet as advertised. The SSD drive doesn’t make noise. The case came with one big fan and one massive fan (and room for two regular fans), but I only had one fan plug on the motherboard so that doesn’t make too much noise. And there’s the power supply fan. Sucks to have so many moving parts, but sadly that’s the price you have to pay for performance.

The upgrade wasn’t a waste of money as I worried it might be. My website now flies, I almost can’t believe it loads this fast over the internet. My bugzilla is actually usable now. I had no idea I had a bottleneck in my CPU but apparently I did. And the lot of RAM should last for a few years. This experience was the complete opposite of my workstation upgrade where I saw no improvement in performance despite doubling the CPU and RAM.

by Andrew Smith at December 01, 2016 06:59 AM

November 30, 2016

Matt Welke

Learning React Frameworks

Today I started work late since I was feeling ill in the morning. That meant that my team mate had already started working on what I planned to do today, which was creating a new, cleaner-looking version of the user guide. I took the time today to learn more about React and frameworks that use it. This was something I've been meaning to do lately anyway when time allowed. Most people right now don't create plain React apps; they create apps with the Flux or Redux frameworks, which use React as their "view" layer.

I found some excellent resources for learning these concepts in this YouTube tutorial series and this blog post.

React enforces a functional style of programming, which is great. It helps you avoid issues with some state existing here, other state existing there, and now you have to make sure the two places don't change each other's state in ways that screw up your app. You do this with React components by making the top-level component in a tree of components hold the state that gets accessed by those lower-level components. You hoist the state as high as it needs to go. Then, the values in that state flow down (usually as props) when they're changed. The child components get informed of that new state (through their props) and they may or may not re-render. This is "unidirectional data flow".

Frameworks like Flux and Redux take this concept even further. The only place any state is allowed to exist is in a "store". When the store is changed, the components react to that change (using a sort of pubsub/observer pattern). The components' reactions fire "actions", which then may end up changing the store's data. And that may cause components to react to that change. If reading this makes you picture a circle in your head, you've got the right idea.

React is data flowing in a line. Flux is actions flowing in a circle.
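That circle can be sketched with a toy store (this captures the spirit of Flux/Redux as described above; it is not the real Redux API):

```javascript
// Toy store: all state lives in one place, "actions" flow through
// dispatch, and subscribers are notified of every change.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: listener => listeners.push(listener),
    dispatch: action => {
      // The reducer computes the next state from the current state
      // and the action, then every subscriber reacts to the change.
      state = reducer(state, action);
      listeners.forEach(listener => listener(state));
    },
  };
}
```

A component would `subscribe` to re-render on changes and `dispatch` actions from its event handlers, closing the circle.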

At least that's what I've gathered so far… I have more learning to do to understand how this benefits our apps as they scale. We probably won't end up using any of these frameworks for our Visual Analysis Tool now that we've started building it without them, but I think it's still a good idea for me to get familiar with them.

by Matt at November 30, 2016 10:41 PM

November 29, 2016

Matt Welke

Completed User Guide

Today I finished up the part of the user guide that we could complete. That’s describing all of the document types we have in our MongoDB database so that those who use the Visual Analysis Tool (VAT) will understand what they can do for querying. I have a little bit of explanation about how to run and save the queries in there, but it’s hard to go into much detail with that part right now, because we’re changing the way queries are done with our new “filter” system. This lets the VAT users create their own queries to get whatever data they want. We’ll probably end up removing the pre-built queries we made and replacing them with this filtering system.

I’m happy that this part of the user guide is ready to push onto AWS as we continue to demo our work to the staff. But I can tell right now, like most GUIs I create, it’s incredibly ugly for its first version. I began it by just including another react-bootstrap modal because I thought it would be a few paragraphs. I didn’t realize it would need so much documentation. I will definitely have to refactor it out into its own full web page.

When I do that, I’m going to create new React components for the user guide, so that I can save time instead of manually coding up HTML like I did for version 1. It’ll allow the user guide to be a bit dynamic too if we want. For example, for the examples I have listed for the properties of the document types, I could do a query as the user guide is loaded to examine the database and return a set of distinct values already there.

Plus, if I create React components instead of pure HTML, it separates the content from the HTML used to display it. I can choose to go from a table and a set of rows to a div and a set of <<insert fancy bootstrap thing here>>s. It will keep it way more maintainable in the future. There would only be one spot to change the HTML used, and the content itself is still passed in with the React props, just as always.

by Matt at November 29, 2016 10:55 PM

November 28, 2016

Matt Welke

Making a User Guide

Right now I'm working on creating a user guide. This is an area of the Visual Analysis Tool (VAT) that the user can click on to learn how it works. This is going to be a big task so we figured we would start it now. Especially since we're doing more demos with the staff to get their feedback. It would be nice if they knew what the VAT has to offer. This work is a bit tedious, so I don't have much to say about it. Today I added documentation explaining 4 of the document types. I have about 10 left to do.

Right now, it’s just displaying a large modal with this massive amount of information. We may have to refactor this out and create a nice interface dedicated to the user guide.

by Matt at November 28, 2016 10:04 PM

Laily Ajellu

Accessible Complex Tables

Simple Tables

Simple tables are generally easy to use with screen readers because table structure is predictable and repetitive. Just remember to use:

  1. aria-labelledby
  2. aria-describedby
  3. role
(If the above seems foreign to you, check out this post: Intro to ARIA)

Here's an example of one:
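Since the image of the example isn't reproduced here, a minimal sketch of such a table (the ids, caption, and content are invented):

```html
<table role="table" aria-labelledby="grades-caption" aria-describedby="grades-note">
  <caption id="grades-caption">Course grades</caption>
  <tr>
    <th scope="col">Course</th>
    <th scope="col">Grade</th>
  </tr>
  <tr>
    <td>Accessibility 101</td>
    <td>A</td>
  </tr>
</table>
<p id="grades-note">Grades for the fall term.</p>
```

A screen reader announces the caption as the table's label and the referenced paragraph as its description.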

Complex Tables

Complex tables are tables with:
  1. Nested rows or columns
  2. Headings that aren't at the top of a column or at the left-most of a row
Visually impaired users use screen readers to understand table structure as well as the table values. When going through a complex table it can be difficult to follow along with the structure. For the screen reader to correctly interpret your intended meaning, follow these tips:
  • Give all th (table header) tags:
    • an id attribute - a unique identifier
    • a scope attribute - values: col or row
  • Give all td (table data) tags a headers attribute with the space-separated ids of their related headers
  • Use abbr (abbreviation) to define abbreviations
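Putting those tips together, a sketch of a complex table with a nested column heading (all ids and data here are invented; abbr appears as an attribute on th, which screen readers can announce in place of the long heading text):

```html
<table>
  <tr>
    <th id="name" rowspan="2" scope="col">Name</th>
    <th id="marks" colspan="2" scope="col">Marks</th>
  </tr>
  <tr>
    <th id="midterm" scope="col" abbr="Mid">Midterm Exam</th>
    <th id="final" scope="col">Final Exam</th>
  </tr>
  <tr>
    <td headers="name">Alice</td>
    <td headers="marks midterm">72</td>
    <td headers="marks final">81</td>
  </tr>
</table>
```

With the headers attributes in place, a screen reader on the cell "72" can announce "Marks, Midterm Exam, 72" even though the heading isn't directly above the cell.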

References:
  1. Aria Roles
  2. Marking up Tables Properly - University of Washington
  3. Web Accessible Tables - University of Washington

by Laily Ajellu at November 28, 2016 02:29 AM

November 26, 2016

Matt Welke

Unobtrusive JavaScript

Friday I learned a bit more about how some frameworks work under the hood, using a pattern called "unobtrusive JavaScript". Some backstory:

I wanted to add a pop-up that would appear when users tried to delete a query or a filter they were building in our VAT (Visual Analysis Tool). The popup would make them confirm they really wanted to delete it. Bootstrap's "modal" would work perfectly for this. That's the thing that makes the whole screen go dark and then a panel fades in, floating in the center of the screen. The panel has some text and usually two buttons, one to cancel, and one to confirm. I figured because we were already using Bootstrap's CSS classes to style the VAT, using things like the modal would be trivial. Yup. It wasn't.

The Bootstrap modal relies on JavaScript to pull off its functionality. It's wired up to the DOM using the Bootstrap JavaScript code they provide. And because our VAT uses React, we can't safely shove those two JavaScript frameworks together without undefined behavior. My modal, where I simply added the classes to the HTML elements, didn't work. The solution to my problem was to use react-bootstrap, a library where they re-created the Bootstrap components that used JavaScript (and jQuery) to interact with the DOM as React components, which interact with React's virtual DOM the React way. It's safe, and, in the context of a React app, it Just Works™.

So what exactly is unobtrusive JavaScript? Long story short, it’s when JavaScript is used to manipulate the DOM and define events without the HTML element having to be explicitly linked to the JavaScript. It’s unobtrusive because from the perspective of the HTML coder, there isn’t even any JavaScript involved. Here’s an example with a button that changes its text to “hello” when it’s clicked:

Without unobtrusive JavaScript, we define a JavaScript function that changes the HTML of the element it was fired on.

function changeToHello(event) {
  // "" is the HTML element that triggered this function = 'Hello';
}
Then, we use the onclick event handler of the button to link it to our function, which is now an event handler.

<button onclick="changeToHello(event)">Click me!</button>

Now with unobtrusive JavaScript, we use a CSS class (or it could be an id, etc).

<button class="btn-hello">Click me!</button>

Then, in the JavaScript, search through the DOM for elements with that class, and add event handlers to them. This code goes *after* the body, because the elements must be present on the DOM at the time the code runs, and the elements won’t be on the DOM until after the body.

// No function definition, just run code
for (const element of document.getElementsByClassName('btn-hello')) {
  element.addEventListener('click', event => { = 'Hello';
  });
}
Looks complicated, right? Why not just use the onclick HTML attribute? Because we've now made things more loosely coupled. We can choose to make something use this behavior by adding the "btn-hello" class. Want more than one click behavior? No need to make a master onclick function that calls multiple functions. Just add, for example, "btn-hello btn-goodbye" to the element's classes.

This is exactly how Bootstrap implements modals and tabs and all their fancy things that react to users. You add a CSS class, but that’s not just using a stylesheet. It’s tapping into a huge number of event listeners and handlers that the Bootstrap team has created. They might even change them in the future. But you won’t need to worry about what events to tie to your elements. You just add “btn-hello” and call it a day.

by Matt at November 26, 2016 07:00 PM

November 25, 2016

Kezhong Liang

Building a Centralized AIDE Server on CentOS 6

AIDE (Advanced Intrusion Detection Environment) is a host-based intrusion detection system (HIDS) for checking the integrity of files and directories. It creates a database on its initial run, and then runs periodically to compare the current state of the system against that initial database. If there are any discrepancies in those files, such as permissions, ownerships, file size, MAC times, or checksums over the file contents, it generates a report.

Since the database and binary are stored on the local root filesystem, attackers can easily tamper with them if they compromise your system. A good way to protect the database is to store it on another server, one that can access the monitored server but cannot be accessed in return. Another security issue to consider is the user running AIDE: a non-privileged user is safer than root. For the above reasons, I built a centralized AIDE server that sends the binary to a non-privileged account on the monitored servers, runs it, receives the reports, and removes the binary and its report.

The following is my test steps:

On the client:
Create a non-privileged account and grant sudo privileges to it
# useradd aideuser
# passwd aideuser
# visudo
aideuser ALL=(ALL) NOPASSWD: /home/aideuser/*/aide, /bin/chmod 644 /home/aideuser/*/aide.newdb

On the Centralized AIDE Server:
Install AIDE package
# yum install aide words -y

Create a non-privileged account
# useradd aideuser
# passwd aideuser

Make the non-privileged account access to its client by ssh without password
# su – aideuser
$ ssh-keygen -t rsa
$ ssh-copy-id -i ~aideuser/.ssh/
$ exit

Setup a tree for AIDE
# mkdir ~aideuser/bin
# mkdir ~aideuser/configs
# mkdir -p ~aideuser/clients/
# cp /usr/share/doc/aide-0.14/contrib/ ~aideuser/bin/
# cp /usr/sbin/aide ~aideuser/bin/aide.CentOS6.8.x86_64
# ln -s ~aideuser/bin/aide.CentOS6.8.x86_64 ~aideuser/clients/
# cp /etc/aide.conf ~aideuser/configs/aide.conf.CentOS6.8.x86_64
# ln -s ~aideuser/configs/aide.conf.CentOS6.8.x86_64 ~aideuser/clients/

Modify the script ~aideuser/bin/
1. Modify the default mail at the line 205
2. Modify the content of line 276 as below:
ssh -t -l $userid $machine "(umask 077 ; cd ${remote_aidedir}; sudo ${remote_aidedir}/aide --init --config=${remote_aidedir}/aide.conf 2>&1 | tee ${remote_aidedir}/initoutput >> /dev/null)"
3. Before the line 286 “scp -q ${userid}@${machine}:${remote_aidedir}/aide.newdb ${clientdir}/${machine}/aide.db_${machine}”, insert the following line:
ssh -t -l $userid $machine "sudo chmod 644 /home/aideuser/*/aide.newdb"
4. Modify the content of line 292 as below:
ssh -t -l $userid $machine "umask 077 && cd ${remote_aidedir} && sudo ${remote_aidedir}/aide --config=${remote_aidedir}/aide.conf 2>&1 | tee ${remote_aidedir}/report >> /dev/null"

Modify the configuration file ~aideuser/configs/aide.conf.CentOS6.8.x86_64
3 @@define DBDIR /var/lib/aide ==> @@define DBDIR .
4 @@define LOGDIR /var/log/aide ==> @@define LOGDIR .
7 database=file:@@{DBDIR}/aide.db.gz ==> database=file:@@{DBDIR}/aide.db
12 database_out=file:@@{DBDIR}/ ==> database_out=file:@@{DBDIR}/aide.newdb
89 /bin NORMAL ==> /bin DIR
90 /sbin NORMAL ==> /sbin DIR
91 /lib NORMAL ==> /lib DIR
92 /lib64 NORMAL ==> /lib64 DIR
94 /usr NORMAL ==> /usr DIR
146 /var/log LOG
147 !/var/log/lastlog

Change the permissions and ownerships
# chown -R aideuser.aideuser ~aideuser/
# chmod 700 ~aideuser/bin/

Remove AIDE package
# yum remove aide -y

Initialize the database for the client
# su – aideuser
$ cd bin
$ ./ -init

Log in to the client as the root account, back up the ps command to the /tmp directory for recovery, and then copy any file over the ps command to replace it. Go back to the server and run the command to check:

$ ./ -check

AIDE found differences between database and filesystem!!
Start timestamp: 2016-11-20 14:14:05

Total number of files: 21948
Added files: 0
Removed files: 0
Changed files: 1

Changed files:

changed: /bin/ps

Detailed information about changes:

File: /bin/ps
Inode : 1949793 , 1949792

Make the AIDE run periodically
$ crontab -e
1 * * * * /home/aideuser/bin/ -check ALL > /dev/null 2>&1

Prevent aideuser from login into the system
# usermod -s /sbin/nologin aideuser

If you use the stock script unmodified, you may run into permission problems, because the non-privileged account runs the aide command without root privileges:
do_md(): open() for /etc/securetty failed: Permission denied
do_md(): open() for /etc/shadow- failed: Permission denied
do_md(): open() for /etc/cron.deny failed: Permission denied
do_md(): open() for /etc/gshadow- failed: Permission denied
do_md(): open() for /etc/libaudit.conf failed: Permission denied
do_md(): open() for /etc/shadow failed: Permission denied
do_md(): open() for /etc/gshadow failed: Permission denied
do_md(): open() for /etc/sudoers failed: Permission denied
do_md(): open() for /etc/security/opasswd failed: Permission denied
open_dir():Permission denied: /etc/audit

File Integrity Assessment Via SSH

Filed under: Uncategorized

by kezhong at November 25, 2016 07:36 PM

November 24, 2016

Henrique Coelho

Making shell scripts for deployment on AWS

Long gone are the days when deploying a website meant simply uploading .php files via FTP to your favourite web host. Now, if you are using any of those new, fancy technologies such as Node.js, Docker, and cloud hosting, you have to perform several tasks in order to put the new version of your website online. Luckily, these services often let you do this via the command line - not that it makes things easier at first glance, but it allows us to automate these processes. Our application is built on Node.js, it uses 3 Docker containers (2 APIs and 1 database), and they are all deployed as a single task on Amazon Web Services (AWS); in this post I am going to describe how our deployment is currently done, and what I did to automate it.

1- Logging in

To make any changes on AWS, you first have to log in. For this, we use the command that they provided us:

aws ecr get-login

Run this in the command line and it will give you another command; copy and paste the new command, and you are logged in. Or, if you want to be lazy:

aws ecr get-login | bash -e

2- Building, tagging, and pushing the container repositories

For these tasks, AWS provides you the required commands - when you use their website to upload a new version of the repository, they will tell you to run commands similar to these:

docker build -t <name> . &&
docker tag <name>:latest <registry>/<name>:latest &&
docker push <registry>/<name>:latest

3- Stopping the current tasks

For the new containers to be run, the current tasks will have to be stopped. There is no quick and easy way to do this, as far as I know, so this is what I did:

  • List all the current tasks running on the cluster
    aws ecs list-tasks --cluster default

This will give you a list of tasks in JSON format, like this:

    "taskArns": [
  • Extract the tasks

For extracting the tasks, I piped the output into two seds:

sed '/\([{}].*\|.*taskArns.*\| *]\)/d' | sed 's/ *"\([^"]*\).*/\1/'

This is the result:

  • For every line (task), stop the task with that name

Now we can use the command provided by AWS: aws ecs stop-task. I just used a for-loop to go through every line and stop the task:

while read -r task; do aws ecs stop-task --cluster default --task $task; done
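Put together, the extraction can be sanity-checked locally by piping canned `aws ecs list-tasks` output (the task ARNs below are made up) through the same two seds:

```shell
# Feed sample list-tasks JSON through the pipeline; it should print
# one bare task ARN per line, with braces and brackets stripped.
printf '%s\n' \
  '{' \
  '    "taskArns": [' \
  '        "arn:aws:ecs:us-east-1:123456789012:task/abc123",' \
  '        "arn:aws:ecs:us-east-1:123456789012:task/def456"' \
  '    ]' \
  '}' \
| sed '/\([{}].*\|.*taskArns.*\| *]\)/d' | sed 's/ *"\([^"]*\).*/\1/'
```

The first sed drops the structural lines (braces, the "taskArns" key, the closing bracket); the second strips the indentation, quotes, and trailing comma from each remaining line.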

4- Wrapping up

With the basic pieces done, I wrapped them in a shell script:


    aws ecr get-login | bash -e



    echo -e "Ready to deploy the database? (Y/n)"
    read shouldDeploy

    if [ "$shouldDeploy" = "Y" ];then
        echo -e "Deploying the database\n"
        cd ../database &&
        docker build -t <name> . &&
        docker tag <name>:latest<name>:latest  &&
        docker push<name>:latest



    echo -e "Ready to deploy API 1? (Y/n)"
    read shouldDeploy

    if [ "$shouldDeploy" = "Y" ];then
        echo -e "Deploying API 1\n"
        cd ../api1 &&
        docker build -t <name> . &&
        docker tag <name>:latest<name>:latest &&
        docker push<name>:latest



    echo -e "Ready to deploy API 2? (Y/n)"
    read shouldDeploy

    if [ "$shouldDeploy" = "Y" ];then
        echo -e "Deploying API 2\n"
        cd ../api2 &&
        docker build -t <name> . &&
        docker tag <name>:latest<name>:latest &&
        docker push<name>:latest


    echo -e "Stop the current tasks? (Y/n)"
    read shouldDeploy

    if [ "$shouldDeploy" = "Y" ];then
        aws ecs list-tasks --cluster default | \
        sed '/\([{}].*\|.*taskArns.*\| *]\)/d' | sed 's/ *"\([^"]*\).*/\1/' | \
        while read -r task; do aws ecs stop-task --cluster default --task $task; done

    echo -e "Done"


echo -e "Are you sure you want to deploy on AWS? This cannot be undone. (Y/n)"
read shouldDeploy

if [ "$shouldDeploy" = "Y" ];then
    echo "Not deployed"

by Henrique Salvadori Coelho at November 24, 2016 04:04 PM

November 23, 2016

Matt Welke

Replacing Fields with Filters

Today my team mate and I worked on adding filters to the queries we already have using the prototype version of the filtering system he built earlier.

A “filter” is similar to our previous concept of a “field” for the query, but more powerful. A field allowed the user to specify a parameter which was then inserted into one of our pre-built queries, hard coded in the Get API. We called these pre-built queries “presets”, since a query was really a preset combined with a user-specified parameter.

Filters, instead of taking a user-specified value and inserting it into a specific part of the query, dynamically build the “match” stages of a MongoDB aggregation query. The match stages usually go at the very front of the pipeline to restrict what’s returned; they’re the equivalent of the “WHERE” clause of an SQL query. The filter system allows the user to build no matches, just one match, or many matches, and MongoDB’s aggregation framework pipes the results through them. The user can now filter on any field and value they want. Our job now is to help the system produce useful results by protecting it from invalid filtering.
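The idea can be sketched like this (function and field names are hypothetical, not our actual implementation):

```javascript
// Build the $match stages of a MongoDB aggregation pipeline from
// user-supplied filters. Each filter is a { field, value } pair;
// zero filters yields no stages, so the pipeline passes everything
// through unrestricted.
function buildMatchStages(filters) {
  return filters.map(f => ({ $match: { [f.field]: f.value } }));
}

// The stages are then prepended to one of the preset pipelines:
const pipeline = [
  ...buildMatchStages([{ field: 'userId', value: 'abc123' }]),
  { $sort: { date: -1 } }
];
```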

This meant that instead of creating more queries today or adding more to the existing queries, my work was to help my team mate by changing the queries to enable the use of MongoDB’s aggregation framework for each of them, and to begin coding up the various error messages that would be thrown for invalid filtering. I feel this is worth my time, since a robust querying system, including filtering, is something the client has on their wishlist, according to the feedback we got when we demoed what we had on Thursday.

by Matt at November 23, 2016 11:43 PM

November 21, 2016

Matt Welke

Boosting Current Queries

Today I worked on improving the queries we already have to use more display components. We used to have a “User Story” and a “Hit Story” which showed time lines of the hits the user performed and of the actions the user performed on that hit respectively. This has been changed so that we have three queries:

  • “User Story” which shows a summary of information (which can be expanded in the future) about the user *as well as* a time line of the sessions the user performed in their history. A session is a group of hits on a particular device.
  • “Session Story” which shows a summary of information about the session as well as a time line of the hits the user performed during that session.
  • “Hit Story” which shows a summary of information about the hit, including what user agent the user’s devices had during the hit, as well as a time line of the actions the user performed on the page (clicking etc) during that hit.

Each time line item shows the id of the item it represents. This means that if a user runs one query and is particularly interested in one time line item in that query’s results, they can copy and paste the id of that time line’s item and use that to run another query (which may itself produce another time line).

For example, if they run a query on a user to see their user story, they can then use the id of the last session of the results to run a session story query, and then look at the ids of the first and last hits in those results to run hit story queries. They can find a particular user and examine how their browsing behavior changes between their first and last visit. Are they getting more or less interested in the material? Were those visits further apart (implying a regular) or were they close together (implying a disinterested user who will not return)?
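A rough sketch of that flow (result shapes and ids are made up):

```javascript
// Hypothetical result shapes: each timeline item carries the id of
// the entity it represents, and that id is a valid parameter for
// the next query in the chain.
const userStory = { userId: 'u1', sessions: [{ id: 's1' }, { id: 's2' }] };

function lastId(items) {
  return items[items.length - 1].id;
}

// The id copied from one result becomes the parameter of the next query:
const sessionStoryParams = { sessionId: lastId(userStory.sessions) };
```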

This is an example of the kind of behavior we’re planning to implement, making this tool flexible and extensible for the staff. We aim to make all of our queries “pipe” or “flow” into each other like this. We may even be able to leverage the framework to detect what types of queries the results of one query could be piped into, and make it so that the user can do this with as little effort as possible.

by Matt at November 21, 2016 10:24 PM

November 18, 2016

Matt Welke

Polishing the Get API

After meeting with the client, we learned that they like what they see so far in terms of our Get API’s Visual Analysis Tool, but both they and my team lead agree that it would be better long term if they get a tool that’s easily extensible. They should be able to build queries themselves instead of relying on us.

Therefore, we’re looking into making the tool more flexible than just displaying queries that are runnable. We’re looking into either making the “fields” more flexible (allowing them to add AND/OR boolean logic to the fields) or creating a sort of query builder, which would allow them to assemble Mongo queries using GUI elements which then join the pre-built queries on the screen for everyone accessing the tool to run. My team mate is creating a prototype of this Query Builder.

Meanwhile, I’m extending and polishing the existing prebuilt queries to use multiple display types each. For example, for the “Hit Story”, instead of just displaying a time line of the actions performed on that hit (clicking on something, reaching a scroll point, etc), it also now displays information about the hit itself. Our new schema design for our queries allows them to have multiple display components in the GUI. This allows the queries to be more useful.


by Matt at November 18, 2016 10:27 PM

Laily Ajellu

BBB’s different storage systems


BigBlueButton uses MongoDB to store most of its data, eg. userIds, meetingIds, presentationIds, and emoji choices. MongoDB is a NoSQL database system that doesn’t use tables. Instead, it uses JSON objects. One-to-many relationships between two objects are represented by nesting objects.

For example: One Product object can be made of many Parts objects

Here is the Part object
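It would look something like this (a hypothetical Product embedding its Parts; field names are illustrative, not BBB's actual data):

```javascript
// One-to-many via nesting: the product embeds its parts directly,
// instead of joining separate tables as a relational database would.
const product = {
  name: 'fan assembly',
  parts: [
    { partno: '123-aff-456', name: 'grommet', qty: 94 },
    { partno: '123-xyz-789', name: 'fan blade', qty: 3 }
  ]
};
```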

localStorage and sessionStorage

For a user’s UI settings, however, our project uses localStorage because:


  • Simple to set and get values:
    • localStorage.setItem('myCat', 'Tom');
    • sessionStorage.setItem('key', 'value');
  • The Storage API is already available in all major modern browsers. You don't have to use any particular packages, although we wrap Storage in our own component to add more methods
  • The settings UI data doesn’t need to persist once the user logs out, since a new userId is issued to a user every time they open BBB
  • the settings UI data does not need to be highly secure


Drawbacks:

  • This type of storage doesn’t deal with race conditions
  • No DB query syntax, so it doesn’t scale well

Why localStorage and not sessionStorage?

localStorage has no expiration time, whereas sessionStorage is cleared once the browser window is closed. In the future, we would like to load user UI settings based on the userId. But currently, we do not have a system for storing users at all. By using localStorage we are future proofing BBB so that when a user logs in, their localStorage will still be available and populated with their UI setting choices.
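The thin wrapper around Storage mentioned above might look something like this (method names are hypothetical; the backend is injected so the sketch also runs outside a browser):

```javascript
// A thin wrapper over a Web Storage backend (window.localStorage in
// the browser) that adds JSON (de)serialization and default values.
class SettingsStorage {
  constructor(backend) {
    this.backend = backend; // e.g. window.localStorage
  }
  set(key, value) {
    this.backend.setItem(key, JSON.stringify(value));
  }
  get(key, fallback) {
    const raw = this.backend.getItem(key);
    return raw === null ? fallback : JSON.parse(raw);
  }
}
```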


William Zola: MongoDB Schema Design
Mozilla: localStorage
Mozilla: sessionStorage

by Laily Ajellu ( at November 18, 2016 12:36 AM

November 17, 2016

Matt Welke

Improving Timeline Component

Today I mainly worked on the Timeline React component used by our Get API. My team mate made lots of little polishing touches on the app while I was away at the conference, which unfortunately had the side effect of breaking the existing Timeline component. After fixing that, we ended up creating a new one anyway that would look better. Strangely enough, I went back to using a <table> modified by CSS instead of the plain HTML5 <div> approach I had in my second timeline. We were able to solve any quirks from using a table for layout, and it looks great.

We’ve got that polished version hosted, and by now we’ve collected lots of data since going live, so at our meeting tomorrow with the client, we can get some good feedback and learn how they think they’ll use the Visual Analysis Tool part of our Get API to analyze the collected data. We’ve already seen some cool info come out of it, like knowing which of their authors write articles that get the users to scroll all the way to the end, and we think they’ll really enjoy it.

by Matt at November 17, 2016 11:13 PM

November 15, 2016

Matt Welke

Actions and the Schema

Today was frustrating. I was working on another query that would use the time line display component I made, and I wanted this query to display the actions a user performed (scrolling to the bottom, selecting text, etc) during a visit, and when they performed them. After figuring out the Mongo syntax to perform this query, which involved pulling out the “action” documents out of the “hit” documents, I came to the conclusion we needed to change the schema of the hits and actions in our database to make this more feasible. We were being a bit too minimalist, just storing the actions embedded inside the hits like so:

hit = {
    date: …,
    scrollActions: [
        { … },
        { … }
    ],
    clickActions: [
        { … },
        { … }
    ]
}
The problem with this is that after performing the $setUnion and $unwind stages of our query, we lose the names of those actions. Everything becomes just “actions”, and this means my display components, like my time line, won’t know what type each action is. That prevents me from displaying anything about the action. All of our actions have different schemas.

After discussing this with my team mate, we came to the conclusion we could get around this by having the query send everything over without manipulating it, and just using server-side JavaScript to parse what we need out of the results when the server gets them, but this isn’t efficient long term for queries with large results, and we decided that a better schema would be better long term anyway. Our changes are going to include storing the action’s type (as a String) as an attribute of the action. This is simpler than parsing results that don’t have the action’s type as an attribute and trying to infer it from the keys’ values (while hoping that the keys didn’t get mucked up as part of the Mongo query).
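The schema change can be sketched like this (keys and values are illustrative, not our exact schema):

```javascript
// Before: the action's kind lives only in its parent key name
// ("scrollActions", "clickActions"), which is lost once the arrays
// are merged. After: every action carries its own type attribute,
// so a merged stream of actions stays self-describing.
const hit = {
  date: '2016-11-15',
  actions: [
    { type: 'scroll', depth: 0.75 },
    { type: 'click', target: 'signup-button' }
  ]
};

// A display component can now branch on the type attribute:
function describeAction(action) {
  return action.type === 'scroll'
    ? `scrolled to ${action.depth * 100}%`
    : `clicked ${action.target}`;
}
```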

I won’t be working on this again until Thursday, because I’ll be attending a Polytechnics Canada event in Ottawa tomorrow, so my team mate will be making these schema changes before I resume work, probably to continue creating queries for the Get API.

by Matt at November 15, 2016 01:00 AM

November 11, 2016

Matt Welke

Timelines and more

Over the past few days we got a chance to show the first version of the React app for the Get API that I prepared. They liked it but mentioned the interface could use some improvement. Of course, that’s expected, it’s just a crappy UI I threw together while I was learning React. My team mate has already begun improving the UI with a tagging feature to help manage all the queries on the screen, and we’ll be doing more changes as we learn from them what they want from the tool. It already supports basic data display, as well as graphs, and now I’ve also begun working on a “time line” display component, which is going to be used for displaying time lines or user stories, describing what the user did over the course of their history, or what they did during a certain visit to a web page. I didn’t find many JavaScript graphing libraries that had this feature, so I set out to create my own.

I originally made one using the <table> element, re-using a lot of the functionality of HTML tables. My timeline was supposed to display a vertical line going down, representing time, and then beside that the things along the time line, so I figured a table’s lining up of columns would help with this. Unfortunately, I ended up wasting my time, because when I added this to the React app, the Bootstrap we’re using in the React app to style it overwrote a lot of basic table properties, messing up the display of my component. I should have done things right the first time and made something from scratch with <div>s. Oh well, lesson learned… I recreated it with raw <div> elements appropriately styled and arranged and so far, so good.

I’m taking advantage of the compositional nature of React components here, so instead of making a monolithic “Timeline” component, I made this component, but also made a few smaller components that make it up, including “LineSegment”, “Event”, and “Spacer”. Each has a purpose in displaying part of the time line, and make it dynamic. It’ll be quicker for us to come back in later and go “Oh we need to adjust the space between these events.”. Instead of “Oh crap we have to change the height property on every spacer and all of its child divs in there”, it’s “Yay we made a Spacer component with a ‘height’ property, so we’ll just change the height property where we use that Component”. One of the ideas I have in the future that may take advantage of this breaking it down into little parts is recursively checking what each event is comprised of (is a Hit event comprised of Clicking on button events?), and having a sort of nested recursive time line. There are possibilities.
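That composition can be roughly illustrated with plain functions standing in for the React components (the markup is simplified; this is not the real component code):

```javascript
// Each small piece renders one fragment of the timeline; Timeline
// just composes them. Tweaking the spacing means changing only the
// spacer's height parameter, in one place.
const lineSegment = () => '<div class="line-segment"></div>';
const spacer = height => `<div style="height:${height}px"></div>`;
const timelineEvent = label => `<div class="event">${label}</div>`;

function timeline(events, gap) {
  return events
    .map(label => timelineEvent(label) + lineSegment() + spacer(gap))
    .join('');
}
```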

I’m going to continue making queries, which are now more focused on what the client wants to know, and add them into the app.

by Matt at November 11, 2016 11:18 PM

November 09, 2016

Matt Welke

Queries Set up in Get API and Ready for Data!

I’ve now finished setting up all of the queries we made to be displayed in the GUI. We ended up not using graphs for any of them yet. The ones we made right now are simple, and simple lists or tables work best for the type of results they display.

We began running the client code in production today and I’ve already linked the Get API to use this production data. It’s only a very small portion of their site that we’re collecting data from (we only collected about 100 hits before I went home for the day) but it’s neat to see the queries and my React app working great to look into this data as we collect it. I’m looking forward to showing our progress soon.

by Matt at November 09, 2016 11:57 PM

November 08, 2016

Matt Welke

Code Cleanup and Queries

Today I continued work on preparing the queries for display in the React app. There’s not much exciting to report here. It’s just me examining the way the results were arranged (the queries were prepared by my team mate while I worked on the React app itself) and transforming them into either a table or a graph. I took a lot of time today to also pause and clean up my code first. I was very new to the Koa framework that we’re using to drive the back end of the Get API, and also very new to React. I ended up with giant files (the app.js of the back end had almost 1000 lines of code) and it was a nightmare to read. I was able to factor out the individual routes and layers of middleware used so that the app.js file of the back end was less than 150 lines of code.

My plans over the next few days are to finish preparing the queries before we plan on beginning to examine the collected data on Thursday.

by Matt at November 08, 2016 09:56 PM

Laily Ajellu

Tab, Arrow Keys, Space and Enter - Get 'A' Certified

Some keys must be programmed to make navigating your web application accessible. These keys are:
  1. Tab and Shift-Tab
  2. Up, Down, Right, Left arrows
  3. Space and Enter

Tab and Shift-Tab

The user should be able to jump from element to element when they press tab.

For example, in a navigation bar, tab moves the focus to the next option in the menu.

(Pressing the right and down arrow keys should do this same thing)

tab and shift-tab key code example

Up, Down, Right, Left arrows

Use the down and right arrow keys to move the focus to the next submenu in a drop-down menu, and the up and left arrow keys to move the focus to the previous submenu.

arrow keys code example

Space and Enter

These two keys should both select (click) the element that has focus.

space and enter key code example

Example Code

To understand the code below, keep these things in mind:

  • In JavaScript, each key is associated with a unique keyCode.
    For example, the keyCode for Tab is 9.
    You can find the list at: CSS Tricks: Key Codes
  • The focused menu item here is the menu item that the user has moved focus to, but hasn’t chosen yet. This is usually shown visually with a dotted box surrounding the menu item or it might be highlighted.

    Example of a focused menu item
  • The active menu item is the menu item you chose (by clicking, or pressing space or enter). Focus may be on a different menu item.
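The key handling described above might look like this (the menu model and function names are hypothetical):

```javascript
// Map a keydown to the next focused index in a menu of `length` items.
// Tab (9), Down (40) and Right (39) move focus forward; Shift-Tab,
// Up (38) and Left (37) move it back; Space (32) and Enter (13)
// activate the focused item instead of moving focus.
function nextFocus(keyCode, shiftKey, index, length) {
  const forward = (keyCode === 9 && !shiftKey) || keyCode === 40 || keyCode === 39;
  const backward = (keyCode === 9 && shiftKey) || keyCode === 38 || keyCode === 37;
  if (forward) return (index + 1) % length;
  if (backward) return (index - 1 + length) % length;
  return index;
}

function isActivation(keyCode) {
  return keyCode === 32 || keyCode === 13;
}
```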
References: Find the original Code here:

by Laily Ajellu ( at November 08, 2016 07:44 PM

Andrew Smith

Start-em early, workbench for my 2-year-old

My son likes tools. I hope that he keeps getting better at using them and he’ll know more by the time he’s 12 than I do now. To get him started I needed a real workbench for him, which is his own. I’ve been planning this for a while, and finally got it done. Here it is:

It’s made almost 100% from leftovers. Legs from a table, cut shorter and painted. 3/4″ G1S pine plywood and pine trim from another project, leftover 1/2″ pegboard pieces, and random old tools from me and grandpa. The only new thing is the vice.

Except for the legs and the pegboard the entire thing is held together with biscuits. It’s my first attempt at using the biscuit joiner and I’m pretty impressed with the results, even though I screwed up one edge by cutting on the wrong side of the corner :)

by Andrew Smith at November 08, 2016 05:10 AM

November 07, 2016

Matt Welke


Today I realized as I implemented dygraphs on the React app that we need to go beyond graphs. Not all of the data we query is going to look best in a graph. Sometimes it might be better in a table. Or, the query produces a yes or no answer or a key value pair answer. If the answer is as simple as “The number of users fitting your criteria is 42.” then we don’t need a graph.

I’m refactoring out the functionality in the React app that displays query results as graphs into individual components that will render query results differently. So far I’m making a Graph component (which is pretty self explanatory), a Table component (which breaks down into a <table> with a heading, some rows, and maybe later on some neat buttons they can click for sorting, hiding, or aggregating rows), and a List component. The list component is for the simplest of queries which just return key value pairs. A list item would be a key and a value. For example:


  • Active users: 42
  • Inactive users: 84
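Choosing among the three could be sketched with a heuristic like this (hypothetical; not our actual selection code):

```javascript
// Pick a display component from the shape of a query result:
// a bare key/value object works as a List, an array whose rows are
// all numeric can be graphed, and anything else falls back to a Table.
function pickDisplay(result) {
  if (!Array.isArray(result)) return 'List';
  const allNumeric = result.every(row =>
    Object.values(row).every(v => typeof v === 'number'));
  return allNumeric ? 'Graph' : 'Table';
}
```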

by Matt at November 07, 2016 10:26 PM

November 04, 2016

Matt Welke

Search Fixed, Queries Implemented, Graphs Next

Slowly and steadily, the UI is coming together. I moved the queries my team mate developed into the app, so we have about 20 queries developed as per the client’s wishlist.


With my team mate’s help, we improved my searching mechanism to use RegEx, which fixed the accuracy issues. It turns out I did indeed just choose the wrong CSS to implement the hiding as part of the searching.

When a query is deemed “not a match”, it gets a “hidden” React property set to “true” (if it is a match, that property is set to “false”). That got interpreted deep down in the React code as: set the CSS “visibility” property to “hidden” if that property were true, otherwise set it to “visible”. However, this had the effect of leaving the physical space on the DOM for that element reserved. What I actually wanted was to be changing the “display” CSS property: “none” if hidden were true, else “block”.

Once I made that change, combined with the switch to RegEx, the searching works perfectly. It will let the staff search for queries by their title and description. It would be trivial to add more parts of the query that the searching mechanism is able to look at, or to make the searching mechanism use checkboxes or radio buttons for more types of searches (AND vs OR) or to specify what parts of the queries to search. The screenshot below shows the results of performing the default AND type search with the term “hit rate”. The graph displayed has nothing to do with the data from that query, and is just there to illustrate what might be displayed after the “Run Query” button is invoked.
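The AND-style search over titles and descriptions can be sketched like this (a hypothetical sketch, not our actual React code):

```javascript
// AND-style search: a query matches when every whitespace-separated
// term appears (case-insensitively) in its title or description.
// Note: terms are used as regex patterns, so regex metacharacters
// in the search string would need escaping in real code.
function matchesSearch(query, search) {
  return search
    .split(/\s+/)
    .filter(term => term.length > 0)
    .every(term => {
      const re = new RegExp(term, 'i');
      return re.test(query.title) || re.test(query.description);
    });
}
```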


I began looking into integrating Dygraphs into the React app, to allow the query results to be shown graphically. Most humans don’t speak JSON. This will be something usable by anybody, including those who aren’t particularly technical.


As usual, when considering what libraries to use in the project, we opt to use stable libraries with lots of community support. Dygraphs definitely qualifies. I can’t wait to see how this will display the production data we’ll soon begin gathering.

by Matt at November 04, 2016 09:54 PM

Henrique Coelho

Tonight on Animal Planet: SSL Certificates

This is something very important that I never really understood: how to use SSL certificates. They are extremely important if you are maintaining a web application, but I never really bothered to read about them - it was the secure elephant in my room. But now I finally got the motivation (peer pressure and necessity) to research it, and this is what I learned:

SSL certificates are supposed to make your website more secure (duh), and they do this by ensuring:

  1. Encryption - your data will be encrypted, which is good!
  2. Data integrity - your data will not be broken, which is good!

It also has some nice side-effects:

  1. Green address bar - your address bar turns green, which is good! I mean, your visitors will know that your website is secure. If you do financial transactions or collect important information, your website will be a lot more attractive with that pretty, green bar for your users.
  2. Prevents attacks - since your data will be encrypted, it will be (almost) impossible to steal it with "man in the middle" attacks. This is good too.
  3. Boost in ranking for searching engines - Google, for instance, will rank your website better if you use HTTPS. This is probably good.

HTTPS certificates are not free, and since you have to pay for them, you probably should think about your priorities: do you really need HTTPS in your blog that nobody reads? Probably not. Do you need HTTPS in a website with financial transactions? Probably yes.

So, first of all, how do you get a certificate? Simple. Follow these steps:

Step 1: pick your certificate type

An Overview of the Basic Types of SSL Certificates Available

Extended Validation (EV)
  • Types of sites: eCommerce, sites collecting personal info, sites where user trust is paramount
  • Features: 2048-bit encryption; green bar to provide top-of-the-line trustworthiness; the type used by web giants like Twitter, banks, etc.; issued in 3-5 days

Organization Validation (OV)
  • Types of sites: eCommerce, sites collecting personal info
  • Features: verifies that the site is a registered government entity; 128-, 256-, or 2048-bit encryption; issued in about 24 hours

Domain Validation (DV)
  • Types of sites: testing sites, internal sites, non-eCommerce sites
  • Features: very affordable; issued almost immediately

In addition to the 3 main types above, we also have:

Multi-domain certificates

If you need to secure multiple domains but only want one certificate: you can have up to 100 domains on your certificate, and if you get another domain, you can just add it to the certificate.

Wildcard certificates

With these, you can secure your website, as well as any subdomains. They can be either DV or OV, but not EV.


Step 2: buy the certificate

There are several websites where you can buy certificates. Sometimes your own hosting company will offer this service. The price and speed for issuing the certificate will vary.


Step 3: install the certificate in your website

This will depend on what host you are using - they will have their own methods to apply the certificates. If you are using Node.js, you will probably have to use it directly in the .js file that creates the server.


Step 4: update links and images

Make sure all links in your website point to an https route instead of http. Do the same thing for images, CSS, scripts, and so on. It is also a good idea to redirect the traffic coming from any http route to the https route.



by Henrique Salvadori Coelho at November 04, 2016 12:53 AM

November 03, 2016

Matt Welke

Adding Useful Queries, Learning How Things Fit Together

Today’s highlight was going to the client’s office to learn in person how their current technologies fit together. We know that we will eventually need to link the data we gather into their Elasticsearch service so that it improves the quality of recommendations made to their users. We learned that they use AWS’s API Gateway to create serverless APIs that their DNN CMS connects to for recommendations.

That API provides a layer of abstraction. The DNN system doesn’t know how it gets recommendations, it just knows how to query it for them. This is good because it means we can build either a separate API linked to by their API or simply augment their API to add in our data. Or, we will simply add our collected data to their Elasticsearch service. Right now, every 15 minutes, they scan for changes in their articles etc and move that data into Elasticsearch so that it’s usable for searching. We may create a tool to periodically inject data about the users, and help them create Elasticsearch queries that use that new data. Either way, everything seems nicely-decoupled and open. It shouldn’t be hard to add in our features.

Progress on the React app “Visual Analysis Tool” (VAT) we’re creating is going well too. I’m moving in the queries that my team mate developed so that we can see how they look in it. Because of the pattern I developed for storing the queries, it’s very fast and easy to add a new query in. The React app gets the info from the server serving the app, so there’s no changes to be made to the client side React app code when a new query is added. We’ll be having our client module code start running on a small part of their site on Tuesday, so that we can begin collecting useful data to play with. That’s when the VAT will start to shine.

by Matt at November 03, 2016 09:04 PM

Laily Ajellu

Localization - Get 'A' Certified

Accessibility and Localization

Accessibility means that anyone in the world can access and use your application. This, of course, includes people who understand languages other than English. We do this through localization, the process of converting your app content to the local language.

Services like Google Translate can translate your page easily for your users, but what can you do to ensure that your page is translated?  

HTML and localization

Use the lang attribute.
Put it on your top level tag, the html tag.  This way, all of its children will inherit the language as well.  

You may want to have more than one language in your application content.
For example, if your webpage teaches Chinese to its users you might have something like:
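Something along these lines (the exact markup is illustrative):

```html
<html lang="en">
  <body>
    <p>The word for cat is <span lang="zh">猫</span>.</p>
  </body>
</html>
```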

The lang attribute above would only apply to the span and its children tags, if it had any.

A Little Warning

You might have thought that the top level tag to place the lang attribute would be the DOCTYPE tag.


And in fact, there is an attribute that indicates language on this tag.  But this is the schema language, and not the language of your content, so be careful not to mix the two up. You still need to use the lang attribute.

Language Direction

Some languages like Arabic, Farsi and Hebrew write their text from right to left. Setting the lang attribute in this case is not enough to ensure that the content will be understandable.

Use the dir attribute (dir stands for direction) to make the text be rendered from right-to-left.
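For example (illustrative markup):

```html
<html lang="ar" dir="rtl">
```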

You can also set the font in CSS to help with readability:
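A rule along these lines (reconstructed illustratively from the description that follows):

```css
/* Prefer an Arabic-friendly font when the content language is Arabic,
   falling back through the list if a font is unavailable. */
:lang(ar) {
  font-family: "Traditional Arabic", "Al Bayan", serif;
}
```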

This way, if your application detects that the language being used is Arabic, it will choose Traditional Arabic as the font family. If the user’s browser doesn’t have that font, it tries “Al Bayan” and then finally “serif”.

How it’s implemented at BigBlueButton

At BBB we have a JSON file that holds all the strings that are shown/read or otherwise presented to the user (eg. presented with an accessible device like a braille display).  

All of these strings can be translated into different languages. We ask the open source community to help us translate them into all the languages of the world; some languages have complete translations at this point, whereas others don’t.

To tie it all together, we use a react component called FormattedMessage. FormattedMessage has a few attributes:
  1. id: this id matches the key in the JSON file containing this same string
  2. defaultMessage: The string in English (to default to in case there is no translation made for that string yet by our community)
  3. description: A description of what the string is for, so that other developers working on the code can identify its use
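Under the hood, the fallback behavior amounts to something like this (a simplified hypothetical sketch; react-intl handles this for us):

```javascript
// messages: translated strings keyed by locale, then by id (the same
// ids used by FormattedMessage). Fall back to defaultMessage when no
// translation exists yet for the user's locale.
function formatMessage(messages, locale, id, defaultMessage) {
  const forLocale = messages[locale] || {};
  return forLocale[id] !== undefined ? forLocale[id] : defaultMessage;
}

const messages = {
  'pt-BR': { 'app.title': 'Minha sala' }
};
```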


by Laily Ajellu ( at November 03, 2016 02:48 AM

November 02, 2016

Matt Welke

Searching Added to React App

Today I added styling and search functionality to the React app. Searching will be necessary because of the massive number of queries the staff may save to the tool. We want to make this as user friendly as possible for them. This is especially true because a number of their staff who may use the tool may not have a technical background.


In addition to searching, I will soon be adding categorization so that they can simply choose a tab and it will display all queries that belong to that category. The search I made right now has a bug in that it simply hides some elements on the screen (but leaves their spot on the DOM reserved), which makes it look a bit amusing…


React blows my mind in terms of how simple it is to maintain. I already know how I’ll fix this bug. Assuming it’s not simply a poor choice of CSS class, I’ll need to split my “query” array into two arrays: one to store all the queries and one to store the ones we’re currently displaying (according to the current search term or tab selection). And there’s only one spot in the application to make that change, one block of scope. I won’t have to worry about that change of state affecting other parts of the web app because state flows downward from the ultimate parent “App” component down into the little components that make up what you see in those screenshots.

Next steps will be to add a graphing library to display the results in something better than raw JSON…


And to link in the queries my team mate developed according to the client’s interests.

by Matt at November 02, 2016 08:41 PM

November 01, 2016

Matt Welke

React Lives!

The work on the React app, both the back end API to drive it and the front end app itself, is finally paying off. The app is functional. It allows you to see the queries we created (where the parameters are coming from the data store), edit those parameters and save them to the data store, and duplicate the queries to create more configurations. The nature of React also means that making these changes, one button click at a time, provides as much visual feedback as we want it to. We just change the state of the parent component and the whole thing can refresh. It fits together quite nicely.

Soon, after some styling and more thoughtful layout etc, this thing will be pretty useful.

by Matt at November 01, 2016 08:58 PM

October 28, 2016

Matt Welke

Working on Get API Back End

This part of my work has been challenging. The React app itself is working well now. It’s able to have a user interact with it to see the queries, modify the values in the fields, and run the customized query. But up until now, we’ve just been having it read these “presets” (from which all customized queries are derived) from hard code in the Get API. There were no CRUD operations configured for the customized queries. I’m working on implementing that now. The user not only needs to be able to run queries with customized fields, but also save them. This is why I need to spend time developing this.

We made a mistake in planning about a week ago, where we wanted to do everything related to queries in one GET route. Reading all the queries meant including neither an id query string parameter nor a save parameter. If they included an id query string parameter, it became a “read one”. If they included both the id and the save query string parameters, it became an “update” in terms of CRUD operations. This seemed okay at the time (even though I recognized it broke RESTful conventions) because we didn’t realize how complicated it would get if we needed to update or create new customized queries.

Right now I’m refactoring the code to split it into multiple routes, some GET and some POST, all of which abide by RESTful conventions, meaning that a GET request does not modify state in the data store. Once this is done, the React app will need some re-adjusting to use these new routes for the functionality I already gave it (reading and running queries), and then I’ll have to add the rest of its needed functionality (updating queries and creating new ones, and perhaps deleting them too). This sounds like a lot of work, but I’m not feeling too bad about it, because my experience creating this part of our project has taught me a lot about planning APIs and I have a much clearer picture now about how things should fit together.
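To make the intent of the split concrete, here is a minimal sketch (the route names, store shape, and handlers are hypothetical, not our actual API): GETs only read, POSTs create or update.

```javascript
// Hypothetical sketch of the refactored routes: GET reads, POST mutates.
const store = { 1: { name: 'visits-by-day', params: { days: 7 } } };
let nextId = 2;

const routes = {
  'GET /queries': () => Object.values(store),       // read all presets
  'GET /queries/run': (id) => store[id],            // run one (read-only)
  'POST /queries': (id, body) => {                  // create a new preset
    const newId = nextId++;
    store[newId] = body;
    return newId;
  },
  'POST /queries/update': (id, body) => {           // update (the old "save")
    store[id] = body;
    return store[id];
  },
};

console.log(routes['GET /queries']().length);       // 1
routes['POST /queries'](null, { name: 'copy', params: {} });
console.log(routes['GET /queries']().length);       // 2
```

The point of the shape above is simply that nothing under a GET key touches `store`, so a GET request can never modify state in the data store.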

by Matt at October 28, 2016 09:12 PM

Henrique Coelho

Benchmark: NodeJS x Perfect (Swift)

Yesterday (October 27th, 2016) I went to a presentation called "How to Completely Fail at Open-Sourcing", presented by Sean Stephens, at FSOSS - Free Software and Open Source Symposium hosted at Seneca College. Sean Stephens is the CEO of PerfectlySoft Inc, the company that developed Perfect, a library for server-side Swift development. This immediately caught my attention: I've been thinking about server-side development in Swift for a while, and it seems that it finally happened.

During the presentation, Sean showed us some benchmarks where Swift (using the Perfect framework) beat NodeJS on several fronts. You can see more details in this post. Since I recently benchmarked PHP x NodeJS (around a month ago), I decided to use a similar scenario and test Perfect x NodeJS. This is how I set it up:

I wanted 2 servers: one with Perfect, and the other one with pure NodeJS. For every request, they would go to MongoDB, fetch all results, append some text to the response, and send it back. I used siege as the stress tester, in order to simulate concurrent connections.

I set up a virtual machine with 1 processor core, 512Mb of RAM and 20Gb of storage; the machine was running Debian Jessie. In this machine, I installed Docker and made 3 images:

1st image: MongoDB

FROM ubuntu:16.04
# keyserver/repo URLs restored from the MongoDB 3.2 install docs
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y mongodb-org
VOLUME ["/data/db"]
WORKDIR "/data"
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]

2nd image: NodeJS

FROM node:wheezy
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
CMD [ "node", "index.js" ]

var http = require('http');
var mongodb = require('mongodb');

mongodb.connect('mongodb://mongo:27017/test', function (err, db) { // Mongo container address (placeholder)
  if (err) { console.log(err); }
  http.createServer(function (req, res) {
    var s = "";
    for (var i = 1; i <= 1000; i++) {
      s += '' + i;
    }
    db.collection("test").find({}).toArray(function (err, doc) {
      res.end("Hello world" + JSON.stringify(doc) + s);
    });
  }).listen(8000);
});


{
  "name": "node",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "mongodb": "^2.2.10"
  }
}

Perfect (Swift)


# Copyright (C) 2016 PerfectlySoft Inc.
# Author: Shao Miller 

FROM perfectlysoft/ubuntu1510
RUN /usr/src/Perfect-Ubuntu/ --sure
RUN apt-get install libtool -y
RUN apt-get install dh-autoreconf -y
RUN git clone https://github.com/mongodb/mongo-c-driver.git
WORKDIR ./mongo-c-driver
RUN ./autogen.sh --with-libbson=bundled
RUN make
RUN make install
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN swift build
CMD .build/debug/App --port 8000

import PackageDescription
let package = Package(
  name: "App",
  targets: [],
  dependencies: [
    // package URLs restored from the Perfect project repositories
    .Package(
      url: "https://github.com/PerfectlySoft/Perfect-HTTPServer.git",
      majorVersion: 2,
      minor: 0),
    .Package(
      url: "https://github.com/PerfectlySoft/Perfect-MongoDB.git",
      majorVersion: 2,
      minor: 0)
  ]
)

import PerfectLib

import PerfectHTTP
import PerfectHTTPServer
import MongoDB

let server = HTTPServer()

var routes = Routes()
routes.add(method: .get, uri: "/", handler: {
  (request, response)->() in
    response.setHeader(.contentType, value: "text/html")

    let client = try! MongoClient(uri: "mongodb://mongo:27017") // Mongo container address (placeholder)
    let db = client.getDatabase(name: "test")
    guard let collection = db.getCollection(name: "test") else { return }

    let fnd = collection.find(query: BSON())

    var arr = [String]()
    for x in fnd! {
      arr.append(x.asString)
    }

    defer {
      fnd?.close()
      db.close()
      client.close()
    }

    var s = ""

    for x in 1...1000 {
      s += String(x)
    }

    response.appendBody(string: "Hello world {\(arr.joined(separator: ","))}\(s)")
    response.completed()
})

server.addRoutes(routes)
server.serverPort = 8000

do {
  try server.start()
} catch PerfectError.networkError(let err, let msg) {
  print("Network error thrown: \(err) \(msg)")
}
By the way, I'm sorry if my Swift code is not Swifty enough - I am just a JavaScript peasant. But anyway, these are the results I got:


                          500 users         1,000 users       1,500 users
                          NodeJS   Perfect  NodeJS   Perfect  NodeJS   Perfect
Number of hits            1284     1273     2293     2284     3641     3556
Availability (%)          100      100      100      100      100      100
Data transferred (Mb)     4.08     4.26     7.28     7.64     11.56    11.9
Response time (s)         0.04     0.07     0.41     0.44     0.41     0.12
Transaction rate (/s)     84.89    86.25    161.37   161.19   250.76   250.78
Concurrency               3.85     5.84     65.67    71.08    102.82   30.17
Shortest transaction (s)  0        0        0        0        0        0
Longest transaction (s)   0.22     0.27     7.12     7.16     7.13     0.36

The results were remarkably similar; I actually double-checked to make sure I wasn't making requests to the same container. There are some discrepancies, but I would attribute them to statistical error.

Given that we chose NodeJS for our project because of its resiliency, I think it is safe to say that Perfect is also a very good choice for APIs that are constantly under heavy load.

by Henrique Salvadori Coelho at October 28, 2016 02:00 PM

October 26, 2016

Matt Welke

Query Parameters Working on React

The front end React app is now working with query parameters. This meant orchestrating it so that the queries would accept parameters, and deciding where the "state" would exist in my React components. React handles things in a very functional way, where state that involves multiple components should reside in the parent components (which they call owners) and be propagated down to the child components (which they call ownees) through properties. State is mutable; properties are immutable. Though this adds complication as you learn and develop, it keeps things organized long term, and you can code with confidence, knowing that the state is only in one spot up what you could call the "ownership tree". Just as with any React component, when the state in the ultimate owner changes, it re-renders itself, which means re-rendering all of its ownees whose renderings are out of date.

So, with the query parameters as state of the Query component (not the individual Field components our users modify), it keeps all the state in one spot, in the same spot as the code that actually performs the query that uses those parameters, and we now have Query components that should completely take care of rendering themselves and their output in any situation in our React app.
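The owner/ownee idea can be sketched without React at all (the component and field names here are invented for illustration): the owner holds the state, and ownees render purely from the immutable props they are handed.

```javascript
// Framework-free sketch of React's owner/ownee pattern (names hypothetical).
function Field(props) {               // ownee: renders from props alone
  return '<input name="' + props.name + '" value="' + props.value + '">';
}

function Query(state) {               // owner: the one spot state lives in
  return state.params
    .map(function (p) { return Field({ name: p.name, value: p.value }); })
    .join('\n');
}

// Changing the owner's state and re-rendering refreshes every ownee.
let state = { params: [{ name: 'limit', value: 10 }] };
console.log(Query(state)); // <input name="limit" value="10">

state = { params: [{ name: 'limit', value: 25 }] };
console.log(Query(state)); // <input name="limit" value="25">
```

Real React adds efficient diffing on top, but the data flow is the same: state in one place, props flowing down.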

It’s still completely unstyled, so my work will now be to style it and prepare it for when we begin linking it to production data soon.

by Matt at October 26, 2016 11:02 PM

October 25, 2016

Laily Ajellu

Consistent Visual Cues - Get 'A' Certified

When a user is browsing the web or using an app, they should instantly know how to use it.
Keep these user design points in mind to make your app feel familiar to your users!

Use the same (or similar) visual cues as the majority of other webpages.
For example, use similar:
  • Icon images
  • Button names
  • Page layouts

This allows the user to spend less time learning how to use your website.

The features and icons listed below are common to a lot of applications. If you use any of these buttons, follow the formats below:
  1. One button linking to two different places
    Don't link a button's icon to a different address than the one its alternative text links to.

  2. Icon that depicts a document
    It must have a text alternative: "Download myDoc.docx"
    Format: Download {document name}

  3. Icon that depicts a search bar
    It must have a text alternative: "Search"
    (Not "Find")

  4. Icon that depicts a printer
    It must have a text alternative: "Print myDoc.docx"
    Format: Print {document name}
    Or: "Print bill"
    Format: Print {document type}

Consistent Identification

by Laily Ajellu ( at October 25, 2016 10:15 PM

Matt Welke

Reacted to Making my React App react to Queries

I’ve made some progress with the first prototype of our Get API’s visual tool. We ended up having a bit of work to do to organize things. We originally planned on having the queries the tool needs stored on a server and sent via AJAX to the client-side React tool. Our reasoning was that the connection string for the MongoDB database should not reside in client side code. After speaking with our team lead, we came to the conclusion this wasn’t a big concern, because the MongoDB login can be made read-only, and this tool is only for staff anyways. So that simplified our design somewhat. There would be no need to make an API just for the React app to query (via AJAX) to get its (MongoDB) queries.

As I began coding, I realized that we would indeed have to go back to the original plan of having an API serve the queries. The MongoDB driver and the Mongoose ODM that we need to perform these MongoDB queries need to run on Node.js. It's all server side technology, not client side. So no matter what, this has to run on the server. So this is how it ended up being organized:


  1. Node.js server with a GET route that serves the query objects (which include info for React to display them, and an execution component for the server to actually run).
  2. The React app using the axios library to query this API via AJAX to get what it needs to display query choices to the user browsing it. The app’s button onClick action is to use AJAX to tell the Node.js server to actually execute that chosen query.
  3. That same Node.js server then using Mongoose to query our MongoDB database hosted on AWS (Amazon Web Services), returning the results of that query to the React app to display to the user, fulfilling the last AJAX request mentioned above.

It might seem complicated at first, but this is actually a good separation of concerns in my opinion. It’s flexible in that we can allow it to save and modify queries (since they can be persisted by the Get API Node.js server), and there is absolutely no way an unsafe query can be executed against our data store. The precious data stays safe. Nice.


The next steps for me over the next few days are to refine this (right now very ugly) prototype, and combined with production data we plan to collect asap, learn more about what kind of queries it will provide the staff.

by Matt at October 25, 2016 09:20 PM

October 24, 2016

Matt Welke

Finished Presentation, Worked on React

Today I spent most of the time preparing for my Monday project presentation. It went well, and the other RAs found the demo I did interesting, where I showed them our server logging information from them browsing our dummy page as they browsed it during the demo.

After the presentation, I continued to work on the React app for part of our Get API. It’s challenging to get it integrated properly. I’m not used to going beyond typical MVC type web apps and needing to send a huge chunk of JavaScript to the client which really does the work. Once I got it all set up (which ended up being a WebStorm project nested within a WebStorm project) it works well. I’m still learning how to use React so I haven’t got much of a prototype yet, but I think it’s promising.

by Matt at October 24, 2016 08:58 PM

October 21, 2016

Matt Welke

Working on the React UI

Nothing terribly exciting to report from today. We continued working on the front end API which consists of a web app serving up a React app that runs in the browser. We need the web app to do a bit more work than just serve up the React app though. We need it to be able to keep track of queries the users of the React app create (by create I mean change a parameter, nothing complicated). So we’re using a Node.js-based web app framework to create this, just like we did for the back end API before. We decided to use Koa instead of Express this time though. We found Express to be a bit heavy before. For the back end API, we ended up just falling back to pure Node.js instead of even using Express. For this, we will need some routes but Koa is more modern (it uses JavaScript generators instead of callbacks to improve maintenance) and smaller. You choose components to add to your project rather than getting the whole framework. Don’t need a view engine? Don’t need routes? No problem, don’t import those modules. Koa is actually created by the team that made Express, so it’s thought of as the successor to Express for the ES6-and-beyond world of JavaScript server side web development.
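The generator-based middleware idea can be sketched in plain JavaScript (this is not Koa's real API, just an illustration of the pattern): each middleware runs until it yields, downstream middleware run, then the earlier ones resume in reverse order.

```javascript
// Rough, framework-free sketch of generator middleware (NOT Koa's real API).
function compose(middleware) {
  return function (ctx) {
    function dispatch(i) {
      if (i >= middleware.length) { return; }
      const gen = middleware[i](ctx);
      gen.next();        // run until `yield`
      dispatch(i + 1);   // run the downstream middleware
      gen.next();        // resume after the yield
    }
    dispatch(0);
  };
}

const app = compose([
  function* (ctx) { ctx.log.push('logger-in'); yield; ctx.log.push('logger-out'); },
  function* (ctx) { ctx.log.push('handler'); yield; },
]);

const ctx = { log: [] };
app(ctx);
console.log(ctx.log); // [ 'logger-in', 'handler', 'logger-out' ]
```

The "in, downstream, back out" flow is what makes generator middleware read top-to-bottom instead of as nested callbacks.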

by Matt at October 21, 2016 09:16 PM

Laily Ajellu

How to avoid the top 15 Accessibility Mistakes

When designing an accessible website, there are some common mistakes or misconceptions you may run into. This post describes how to avoid those mistakes so you can start off your accessibility development correctly.

It's always easier to design and develop with accessibility in mind than to add it in at the end, because accessibility is largely a question of UI design.

If you design a beautiful, yet inaccessible site and develop it, you've wasted a lot of time on something that has to be re-developed.

The key is to design a beautiful, and accessible site from the get go, so you don't have to re-code everything with a new design.

  1. All images, whether decorative or informative, should have an alt text. Refer here for how to implement the alt property: Alt Property
  2. Allow a user to use the keyboard to do the same things a mouse user can do. If you can click on something, you should be able to use enter and space to do the same thing.
  3. Don’t trap users in a keyboard navigation loop. Make sure the user can get to all the components by tabbing and other shortcut keys. Test this thoroughly to make sure you can get to all components.
  4. If dynamic content shows up on a webpage tell the user it just appeared.
    Eg. When you choose your country in a form, and a list of cities is loaded based on the country chosen. The user should be notified that this new dropdown menu has appeared on this page.
  5. When a user navigates using the keyboard, tell the user what they have landed on. Is it a settings button? An input field for your name?
  6. When you tell the user what they have landed on, don’t just tell them it’s a button, tell them what the button does. If you have 20 buttons on a page the user is just going to hear “button button button button ...” which doesn’t give the user any context at all!
  7. Don’t use tables to structure your page (an old way of web development), because the user will think they’re in something like an Excel spreadsheet.
  8. When navigating data tables, the user should know what column heading and row heading they’re at, and the value of that cell.
  9. Use headings to indicate what this part of the page is about. Are they at the nav bar? Are they at an article to be read? Are they at a menu where they’re supposed to choose an option?
  10. Don’t use only color to convey info. For example, telling the user:
    "As you can see from the highlighted part of the code, this is the proper way to use a <button> tag"

    Which refers to this code somewhere on the page:

    Users of assistive technology may not be able to see the yellow highlight.
    Instead, you can restate the part of the code you want to refer to:
    "This is the proper way to use a <button> tag:
    <button type="button">Click Me!</button>"

  11. Use captions for video and audio. Also, use captions to describe images.
  12. When an accessible user tabs to a component and chooses an option, for example: “Mute Audio”, don’t reset the tab order to the beginning of the page; resume the tabbing where they left off (at the “Mute Audio” button).
  13. Allow the user to skip over a navigation element easily, especially if it exists on all the pages of your multi-page website. Also, allow the user to skip to different sections of the page.
  14. Give the page a title. The URL will be read, but this is often not as clear as a simple title indicating to the user what page they’re on.
  15. Do not create another page for accessible users and ask them to use it instead, because this segregates them from other users which is inhumane.

Credit to Todd Liebsch for describing most of these common mistakes!

Please leave questions and comments below :)

by Laily Ajellu ( at October 21, 2016 04:04 PM

Henrique Coelho

JavaScript Generators

Asynchronous programming in JavaScript seems to be a double-edged sword: on one side, you have a program that does not block on I/O; on the other side, you have Callback Hell and promises. People seem to love promises; I absolutely hate them. Yes, I know they simplify Callback Hell, but they don't do it well - they are still ugly and messy. There. I said it.

So, what exactly is the problem again? The problem is: suppose you have a series of asynchronous functions that you want to execute sequentially:

function asyncFunction1() {
    setTimeout(function () {
        console.log('- 1');
    }, 1000);
}

function asyncFunction2() {
    setTimeout(function () {
        console.log('- 2');
    }, 500);
}

function asyncFunction3() {
    setTimeout(function () {
        console.log('- 3');
    }, 0);
}

function run() {
    asyncFunction1();
    asyncFunction2();
    asyncFunction3();
}

run();

I wanted:
- 1
- 2
- 3

But this is what I got:
- 3
- 2
- 1

This didn't work because the last function executed much faster than the first one. Ok, what is the solution? Callbacks:

function asyncFunction1(cb) {
    setTimeout(function () {
        console.log('- 1');
        cb();
    }, 1000);
}

function asyncFunction2(cb) {
    setTimeout(function () {
        console.log('- 2');
        cb();
    }, 500);
}

function asyncFunction3(cb) {
    setTimeout(function () {
        console.log('- 3');
        cb();
    }, 0);
}

function run() { // EEEEWWW -v
    asyncFunction1(() => {
        asyncFunction2(() => {
            asyncFunction3(() => {});
        });
    });
}

run();


- 1
- 2
- 3

It solved the problem, but now the run function looks messy and weird. So how exactly can generators solve this problem?

First, I want to explain what a generator is: it is a function that can be stopped and resumed. Useless, right? This is what it looks like:

// The notation function* () indicates a generator
function* Sequence() {
    // The yield keyword stops the function at that point
    yield 1;
    yield 2;
    yield 3;
}

// Here we are instantiating the generator - it did not run yet!
const sequence = Sequence();

// When we call .next(), the generator resumes
console.log(sequence.next());
console.log(sequence.next());
console.log(sequence.next());
console.log(sequence.next());

What we get from .next() is an object with 2 members: value,
which is the value that was yielded, and done, a boolean
indicating if we are done or not. In this example, we called
.next() 4 times:
{ value: 1, done: false }
{ value: 2, done: false }
{ value: 3, done: false }
{ value: undefined, done: true }

It doesn't seem very useful yet, but we can get some interesting functionality from it. In the case below, the generator will never be completed, but every time you call .next(), it will increment a counter in 1:

function* Sequence() {
    let i = 0;
    while (true) { yield i++; }
}

const sequence = Sequence();

console.log(sequence.next());
console.log(sequence.next());
console.log(sequence.next());
console.log(sequence.next());
console.log(sequence.next());


{ value: 0, done: false }
{ value: 1, done: false }
{ value: 2, done: false }
{ value: 3, done: false }
{ value: 4, done: false }

Or maybe we could use it to square the number, instead of incrementing:

function* Sequence() {
    let i = 2;
    while (true) { yield i, i *= i; }
}

const sequence = Sequence();

console.log(sequence.next());
console.log(sequence.next());
console.log(sequence.next());
console.log(sequence.next());
console.log(sequence.next());


{ value: 2, done: false }
{ value: 4, done: false }
{ value: 16, done: false }
{ value: 256, done: false }
{ value: 65536, done: false }

Another important aspect of a generator is that the 'yield' keyword returns a value: the value is whatever is passed as a parameter for the .next() function. In the next example, I'm using the value passed to .next() to reset my sequence:

function* Sequence() {
    let i = 1;
    while (true) {
        const reset = yield i++;
        if (reset !== undefined) { i = reset; }
    }
}

const sequence = Sequence();

console.log(sequence.next());
console.log(sequence.next());
console.log(sequence.next());
console.log(sequence.next(100));
console.log(sequence.next());
console.log(sequence.next());


{ value: 1, done: false }
{ value: 2, done: false }
{ value: 3, done: false }
{ value: 100, done: false }
{ value: 101, done: false }
{ value: 102, done: false }

It is important to notice that in order to reach a yield, you need to call a .next() first, otherwise the parameter will simply get tossed away. This example will not print the first parameter:

function* Sequence() {
    while (true) { console.log(yield); }
}

const sequence = Sequence();
sequence.next("a"); // just starts the generator; "a" is tossed away
sequence.next("b"); // prints "b"
sequence.next();
sequence.next();


So, where exactly do generators solve the problem with asynchronous functions? Ok. Let's start easy. Suppose we have a sequence of 3 typical functions, the middle one being asynchronous. This is how we would make them execute in sequence:

function asyncFunction(value, cb) {
    setTimeout(function () {
        console.log(value);
        cb();
    }, 1000);
}

function syncFunction(value) {
    console.log(value);
}

// This function executes them in the order we want
function run() {
    syncFunction('- 1');
    asyncFunction('- 2', function () {
        syncFunction('- 3');
    });
}

run();

- 1
- 2
- 3

Notice the pattern: when we have an asynchronous function (asyncFunction), we give it a callback (syncFunction) to execute when it is done. Now here is the magic: if the function "run" was a generator instead of a normal function, we could make it yield right after the asynchronous function call, and instead of the asynchronous function calling the next function when it is done, it would resume the generator!

This is how it could look:

let run;

function asyncFunction() {
    setTimeout(function () {
        console.log('- 2');
        run.next();
    }, 1000);
}

function syncFunction(value) {
    console.log(value);
}

// Notice how much cleaner the block of this function is
function* Run() {
    syncFunction('- 1');
    yield asyncFunction();
    syncFunction('- 3');
}

run = Run();
run.next();

- 1
- 2
- 3

A bit messy, right? But you probably agree that the function "run" (now the generator "Run") looks a lot nicer. We can make a wrapper in order to make it look much cleaner.

In this case, I made a function called executeGenerator, which takes care of the ugly part. This is how nice a sequence of 5 functions (3 of them asynchronous) would look using this wrapper:

function syncFunction1() {
    console.log('- 1');
}

// Notice how the asynchronous functions still think they are
// using classic callbacks - this means that libraries that
// rely on callbacks would still be compatible
function asyncFunction2(cb) {
    setTimeout(function () {
        console.log('- 2');
        cb();
    }, 1000);
}

function asyncFunction3(cb) {
    setTimeout(function () {
        console.log('- 3');
        cb();
    }, 500);
}

function asyncFunction4(cb) {
    setTimeout(function () {
        console.log('- 4');
        cb();
    }, 0);
}

function syncFunction5() {
    console.log('- 5');
}

function* run() {
    syncFunction1();
    yield asyncFunction2; // <- no function call here, we
    yield asyncFunction3; //    are yielding the asynchronous
    yield asyncFunction4; //    functions themselves*
    syncFunction5();
}

executeGenerator(run);

- 1
- 2
- 3
- 4
- 5
  • If you had to pass parameters in that case, you could just use .bind to prepare the function.
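The footnote about .bind in action (names invented for illustration): pre-loading the parameter leaves only the callback for the wrapper to supply.

```javascript
// .bind pre-loads `value`, so the caller only has to pass a callback.
function print(value, cb) { cb(value.toUpperCase()); }

const task = print.bind(null, 'hi'); // only the callback is left to supply

let out;
task(function (v) { out = v; });
console.log(out); // HI
```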

It not only looks a lot simpler and cleaner, it is still compatible with asynchronous functions that use callbacks. This is how my wrapper looks like:

function executeGenerator(Gen) {
    const gen = Gen();

    function cb() {
        executeTask();
    }

    function executeTask() {
        const nextTask = gen.next();
        const nextAsyncFunction = nextTask.value;
        const isDone = nextTask.done;
        if (isDone) { return; }
        nextAsyncFunction(cb);
    }

    executeTask();
}


There are 2 sub-functions there: (1) cb is a fake callback (I'll explain what it does later); (2) executeTask gets the yielded values from the generator (the asynchronous functions) and executes them, passing the fake callback we just made as the callback.

We first start the generator, and then resume it to get the first yielded value (the asynchronous function); we then execute it, passing a callback that will execute the task again: it will resume the generator (which will yield the next asynchronous function) and execute that next asynchronous function with the fake callback, which will resume the generator, and so on until our generator is done.

There are several libraries that simplify asynchronous programming with generators, like "co". Despite being hard to understand in the beginning, they are a great alternative to classic callbacks and promises (at least until "async" and "await" are implemented, which will probably happen for the next version of JavaScript).

by Henrique Salvadori Coelho at October 21, 2016 02:00 PM

October 20, 2016

Matt Welke

Multiple Front End Tools

Today we had a rather productive meeting where we got a better idea of what they wanted when it comes to the front end API. We originally envisioned two options for tools. One would be a simple collection of prepared queries, the other would be a powerful graphical query builder. It turns out they don’t really care for a powerful query builder.

They’re more interested in the answers those prepared queries would provide, provided we spend a lot of time carefully crafting them so that the tool is useful for them long term. They also want a way to analyze the data we collected and calculate information that would be used to categorize the visitors. For example, an affinity score might measure how much a certain user likes a certain category. Those variables can be integrated into their Elasticsearch system, which augments their recommender engine.
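As a sketch of what such a calculated variable could look like (the formula and field names are entirely hypothetical, not the project's actual metric), an affinity score might be the fraction of a visitor's hits that fall in a given category:

```javascript
// Hypothetical affinity score: share of a visitor's hits in one category.
function affinity(hits, category) {
  if (hits.length === 0) { return 0; }
  const inCategory = hits.filter(function (h) { return h.category === category; }).length;
  return inCategory / hits.length;
}

const hits = [
  { category: 'news' },
  { category: 'news' },
  { category: 'sports' },
  { category: 'news' },
];
console.log(affinity(hits, 'news')); // 0.75
```

A number like this, computed per user per category, is the kind of variable that could be fed into an Elasticsearch document alongside the user's other fields.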

This two-pronged approach, actively getting analytical answers and passively augmenting their recommender engine, is definitely the best way we can use the data we collected. The scope of this project is huge in terms of the number of applications we’ll be making. It seems there’s a new API being proposed every few weeks. But then again, splitting our application into multiple parts like this is probably a good way to organize things. It’s probably going to be flexible and easier to look at later on when it comes time to take useful work we’ve done and consider publishing it as open source software. All these APIs will be worth it!


by Matt at October 20, 2016 08:42 PM

Laily Ajellu

Reduce Flashing Lights - Get 'A' Certified

Flashing images and blinking lights can be very dangerous for some users, and uncomfortable for others. This post will help you follow accessibility guidelines to get your web app 'A' certified.

Who does this feature affect?

  • People with photosensitivity (light - sensitivity) seizure disorders
  • People with migraine headaches
  • Everyone (Remember, accessibility features create a better experience for all)

How to implement:

  1. Content on the page should not change or flash more than 3 times per second.
  2. If it is necessary for the content to change more than 3 times per second:
    • Show the flashes on a small part of the screen (less than 21,824px square area)
    • The above square area is for the average screen, but if you know the size of the screen your content will be displayed on, there’s a formula you can use to calculate the safe square area: Calculation instructions
    • This should be the only flashing area on the page
  3. Reduce the colour contrast for flashing content
  4. Reduce the screen light contrast for flashing content
  5. Don’t use fully-saturated red colour for flashing content
  6. Use analysis tools to check that your webpage passes.

How Not to implement:

  • It’s not enough to have a button for the user to stop the flashing, because a seizure or migraine can be induced extremely fast. The user wouldn’t have enough time to press the button.
  • Do not just put a seizure warning:
    • because people may miss them
    • children may not be able to read them

Examples of When an Application might use Flashing lights

  • A slide in a presentation changing
  • A video of explosions going off
  • A video of a concert with strobe lights
  • A video of lightning flashing

Interesting facts:

These guidelines were originally for TV programs, but now they’ve been adapted to computer screens taking into consideration:
  • The shorter distance between the screen and the eyes
  • That the computer screen takes up more of our field of vision when we’re looking at it
  • An average of 1024 x 768 computer screen resolution


Three Flashes or Below Threshold

Image References:

Lady with a migraine
Child on the computer
Fully saturated Red

by Laily Ajellu ( at October 20, 2016 02:59 PM

October 19, 2016

Matt Welke

Reacting to Our Need for React

Today I began work on my third mockup, which is when we decided to dive into a heavier front end JavaScript framework. We studied the idea of doing our thorough dynamic UI for querying our back end API and decided that even though it would be difficult to implement, it was feasible. However, we definitely need something more advanced than vanilla JavaScript or even jQuery. We’re going to end up with many HTML elements, many of which will have events associated with them so that they manipulate other elements or talk to servers, etc. And the biggest problem with this is that if you have a client side app that’s removing things from the DOM and adding them in, you need to re-setup all your events for the DOM elements when they’re added back in. This would be a nightmare for an app of this scope.

When it comes to modern front end JavaScript frameworks, ReactJS (aka React) is probably the best bet for us. It has an ingenious system for bubbling events up to one root element which then interprets the events and decides what to do. Therefore, transient DOM elements won’t be a concern to us. This pattern is called “event delegation” and the idea has been around for a while, even before React implemented it under the hood. React also lets you create your own “components” which contain a combination of state and presentation, where the presentation is whatever HTML representation of your metaphorical component you can dream of.
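The delegation pattern can be sketched with plain objects, no DOM required (the names here are invented for illustration): one root keeps the handlers, so transient children never need their events re-attached.

```javascript
// Sketch of event delegation: a single root routes all events by id.
function makeRoot() {
  const handlers = {};                 // id -> handler, survives re-renders
  return {
    on(id, fn) { handlers[id] = fn; },
    dispatch(event) {                  // the one root "listener"
      const fn = handlers[event.targetId];
      if (fn) { fn(event); }
    },
  };
}

const root = makeRoot();
let clicks = 0;
root.on('save-button', function () { clicks += 1; });

// The child element could be removed from and re-added to the page freely;
// the root still routes its events without re-attaching anything.
root.dispatch({ targetId: 'save-button' });
console.log(clicks); // 1
```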

This way of programming brings me back to my days of learning GUIs with Windows Forms. I let that thinking rot in my mind when I got immersed into web development, having been convinced of the merits of RESTful applications that aren’t necessarily event-driven. It’s funny how things tend to converge over time. I’m open minded, and I can already tell that React is a powerful tool, so I’m looking forward to learning it and I know we can produce a great visual MongoDB query builder for this project.

by Matt at October 19, 2016 08:38 PM

October 18, 2016

Matt Welke

Investigating MongoDB Aggregation

Today we decided that it would be a good idea to be ambitious and go for my original “plan A” front end API. This is the one that would dynamically read the schema as the user builds queries, using a GUI to guide them along, and could create any MongoDB query possible. Investigating this route is a good idea because it would provide us with a tool that would stand the test of time. It’s a much better long term solution than a tool that only has a handful of queries we’ve created today based on the current database schema.

The main challenge with creating this front end API is learning what MongoDB code would map to the things the user would click on and the things the user would type in. MongoDB calls this their “Aggregation Framework”. Because MongoDB lacks many of the features that relational databases have (like joins), they have provided their own ways of doing complex analytical queries. For example:

  • SQL has “SELECT” to restrict what you get back from what you match. MongoDB has “$project” for this.
  • SQL has “WHERE” to restrict what you’re matching. MongoDB has “$match” for this.
  • SQL has “INNER JOIN” to allow relating one dataset with another in a normalized system (“show me the users who visited at least 5 times today”). MongoDB can embed the hits into the users in this example with “$lookup”.

From the list above, the way things line up looks simple enough… but the complexity comes from arranging the MongoDB query the right way. It’s arranged differently compared to the SQL queries we’re used to writing. And even though we can use $lookup in place of some joins, memory usage is a concern. MongoDB has a “working set”, which is the data it’s looking at at a particular moment during the execution of a query. In the above example, finding hits and users and relating them, all of the hits and all of the users might have to be held in memory. With the kind of scale we have, this may be a problem. Additionally, you can’t shard your database when you use $lookup, which is key to scaling our solution for the long term.

We have more research to do to make sure what we do will be a good, long term solution, and that we even have time to implement this complex querying and data analysis tool.

by Matt at October 18, 2016 09:33 PM

October 17, 2016

Matt Welke

UI Mock Ups

Today I continued work on my front end API user interface mock ups. In my previous blog post I described two ideas. One was a very heavy duty UI that would adapt to our MongoDB schema, and the other was a simpler collection of queries arranged for them to use. I worked on the second option today. I figured the best way to show the mock up would be to get a quick little prototype running. So I made it with Bootstrap and JavaScript. It’ll end up doing a lot in the front end with JavaScript anyways (using jQuery to manipulate the DOM), and Bootstrap is quick and easy to use, so it didn’t slow down the design process. I have things that do things when you click them, and a simple query filtering system almost working. This will work quite well to show my team lead the idea, and maybe show the staff soon too for some feedback if they ask to see it.

by Matt at October 17, 2016 09:33 PM

Henrique Coelho

Making a User Control module for DotNetNuke

I had to take a small break from JavaScript and Node.js in the past few days and work with ASP.NET: our client module in the front-end needed some information from the server, but there was no way we could retrieve it from the DOM, so I had to develop a small User Control module for DotNetNuke (an ASP.NET framework) that passes this information to our module in the frontend. A User Control module is a module that can be embedded in the skin of the website, so it can be called for every page.

This is how it works: in the backend, we get the information, put it in a stringified JSON, and include it in the DOM of the webpage. For instance:

<%= getServerInfo(); %>

String getServerInfo() {
    return "<script> = 1;</script>";
}

With this, the window instance will have an "info" object, which can be accessed by the JavaScript in the frontend:

    <script>console.log(; // prints 1</script>

This is very simple, but my C# skills were a bit rusty, and I've never worked with DotNetNuke before. These are the files (with pseudocode, of course) that I had to create in order to get a module running:

<%@ Control Language="C#" CodeBehind="View.ascx.cs" Inherits="MyClass.Controller" %>
<asp:Literal ID="placeholder" runat="server"></asp:Literal>

The file above is responsible for creating the View of the module, as well as linking it to its "codebehind" (the logic behind the view), specifying which classes it implements, and making a placeholder for our JSON to be inserted.

using ...;

namespace MyClass {
    public partial class Controller : System.Web.UI.UserControl {
        protected void Page_Load(object sender, EventArgs e) {
            placeholder.Text = "<script> = 1;</script>";
        }
    }
}

The file above is the "codebehind" of the module - as soon as the page loads, it will replace our placeholder.

    <package name="View" type="SkinObject">
      <friendlyName>My Module</friendlyName>
      <component type="SkinObject" />
      <component type="Assembly" />
      <component type="File" />
    </package>

The file above is responsible for defining the module: it tells DotNetNuke what it is.

<%@ Register TagPrefix="dnn" TagName="MYCLASS" Src="~/DesktopModules/MyClass/View.ascx" %>
<dnn:MYCLASS ID="dnnMyClass" runat="server" />

The snippet above is inserted in the skin of the portal, so it can be called in all pages.


After the module is compiled into a DLL, it can be used by the website.

by Henrique Salvadori Coelho at October 17, 2016 02:00 PM

October 14, 2016

Matt Welke

Beginning Work on the Front End API

Today was mostly a research-oriented day for me. I need to look into creating a front end API with a nice user interface for the client to use to query our back end API. I have two main ideas right now:

A) A very powerful UI that would learn about the schema as the user clicks on things, giving them buttons etc. for every possible Mongo query they could do. For example, if they clicked “Hit”, they’d get a few buttons popping up below listing the attributes of “Hit”, or “Session” because there’s a many-to-one relationship between Hit and Session. This thing would learn forever, and always work for them, even as they add more data to the schema.

B) A simple UI with as many queries that we can come up with as possible based on our current schema. They would be sorted into categories so it would make sense, but because it’s just a set of queries we’ve prepared, it wouldn’t grow in the future.

A is extremely hard. I practiced making a mock up in HTML (no functionality) and I still couldn’t even wrap my mind around how it would all connect together. I looked online for some open source libraries in case anybody had already created this “visual query” tool, and I couldn’t find any.

B is more feasible. My team mate agrees with me on this so far.

I began looking into an ODM (object document mapper) called Mongoose to make our lives easier as we make this. ODMs let you use programming-like code to traverse the cardinal relationships among your models. Instead of making a query to get the id of a certain session, and then finding all hits based on that id (two separate queries, lots of code), we can do something like “findTheSession().hits”. Done. Boom.

I love these mappers. I’ve used ActiveRecord with Rails and Entity Framework with .NET, so this feels familiar. Once I get past the growing pains of learning how to set it up, I think it will speed us up a lot. We didn’t need to use an ODM for the back end API because it wasn’t doing any reading of our database, just taking in simple objects and inserting them into the database.

Over the next few days I’m going to continue playing with Mongoose, but also brainstorm types of queries to help analyze their data, and think about the UI we’ll create.

by Matt at October 14, 2016 10:30 PM

Mohamed Baig


As I get more and more experience with NodeJs and ExpressJs, I find myself looking at production settings and best practices more and more.

by mbbaig at October 14, 2016 08:16 PM

October 13, 2016

Matt Welke

We’ve Got the Data, Now What?

Our demo went well! The client’s staff seems impressed by our work so far. There will be more refinement to do on our back end API in charge of receiving visitor data, but we can also start thinking about the other half of our project at this point. We will need to present the information we’ve collected in a useful way for them to use to learn about their users. We will also need to link the data into their existing system. If their existing system, which uses Elasticsearch as a recommender engine for example, can use our data to augment their recommendations, they will be able to grow.

This means a front end API (which we’re also referring to as the GET API) which is able to query our back end API and send responses in two main forms. The first form will likely be JSON, or some other useful, stable, powerful, machine-readable language. This can be used by Elasticsearch or the rest of their system to augment their recommendations. The second form will likely be the V in MVC. It will be an HTML view or a React or Angular web app. It will be something with an interface they can use to build queries and see data presented in a nice clean, pretty way. My role on the team right now is to investigate creating this front end API while my team mate looks into ways to make our back end API collect even more useful information (likely involving hooks with their CMS).

So far this has been a pretty interesting project, but really, we’re only getting started.

by Matt at October 13, 2016 09:21 PM

October 12, 2016

Matt Welke

Demo is Ready!

Today we continued to prepare our demo for the client. We finally got it running on AWS (Amazon Web Services), using their ECS (Elastic Container Service). It was tricky getting a good mental grip on how everything fits together on AWS, but when you get used to it, it isn’t so bad.

Our setup is basically an ECS cluster (of containers) consisting of two Docker containers (one for our Mongo database and one for our Node.js back end API), set up with one ECS-prepared EC2 (Elastic Compute Cloud) instance to run it all. The instance uses EBS (Elastic Block Storage) to store the data so that it’s persisted when the instances are stopped.

In English… Our apps are running in virtual machines inside a virtual machine, and it’s set up in a way that it will reboot them automatically if they crash, and they can be scaled too in the future.

The demo will consist of using a web page with some dummy content with our front end script (which looks for events like scrolling to the bottom of the page) to show the staff tomorrow that it logs the data when we do those events. We have a tool called adminMongo, which is best described as the MongoDB equivalent of phpMyAdmin. We’ll use it to show the data in the database after we’ve clicked on a few things and scrolled around. And it’s all stored in the cloud.🙂

AWS is a beast, but once tamed, it’s a powerful beast.

p.s. I should make an Amazon acronym cheatsheet sometime. Perhaps I’ll call it the AAC.

by Matt at October 12, 2016 09:42 PM

Henrique Coelho

Deployment of containers on AWS

We spent the past days reading about AWS in order to deploy the 2 containers we developed: one container only has a "Dockerfile" with our database (MongoDB) and the other container has the API to insert data into this database (Node.js). In this post, I'll describe some things I would like to have known before I started the deployment; it was a very frustrating process, but after you learn how everything works, it becomes really easy.

First of all, the deployment is not a linear process: you will have to know some details about your application before you start the process; these details, however, will not be obvious for you if you haven't used AWS before: this is one of the reasons why it was a slow, painful process for us.

Looking back, I think the first step to deploy these containers is to upload the repositories, even though they are not properly configured yet: you need the repositories there to have a better perspective on what to do. So, first step: push the docker images to EC2 Container Registry. The process is simple, it only takes 4 steps (3 steps, after the first push), which are just copying and pasting commands in the command line.

After the containers are uploaded, we should choose a machine that will run Docker with the containers, and here is the catch: we need to choose a machine that is already optimized for the Container Service, otherwise it will not be a valid Container Instance and we would have to configure it ourselves. To find machines that are optimized for ECS, we search for "ecs" among the custom instances. After choosing the machine, we select the other specifications we'll need, such as storage, IPs, and so on - but nothing too special here.

With the right machine instance, a default Cluster will be created in the Container Service. Here is the interesting part: the cluster is a set of services, which are responsible for (re)starting a set of tasks, which are groups of docker containers to be used by the machine. Instead of starting from the service, we now should start from the task, adding its containers, and work back to the service - then the deployment will be complete.

To create a task is simple: we give it a name and a list of the repositories (the ones that we uploaded in the beginning), but we also have to set how the containers are going to interact with each other and with the devices outside. There are two special settings we had to configure:

1- The MongoDB container should be visible to the API. This can be done by linking them together: on the container for the API, we map the name of the database container to an alias (for instance: Mongo:MongoContainer); with this, the container of the API will receive some environment variables, such as MONGOCONTAINER_PORT, with the address and port of the other container. We can use this to make the API connect to the database (and the source code would probably have to be modified to use this port).

2- The MongoDB container should use an external drive for storage, otherwise, its data will be lost when the container is restarted. For this, we map the external directory (that we want the data to be stored into) to an internal directory, which is used by the database (for instance, /usr/mongodb:/mongo/db). Since we wanted to use an external device, we also had to make sure the device would be mounted when the machine was started.
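The two settings above could look roughly like this in a task definition, written here as a JavaScript object for readability. Container names, paths, and the database name are illustrative, not our actual configuration:

```javascript
// Hedged sketch of the relevant parts of an ECS task definition for the
// two special settings above. All names/paths are illustrative.
const taskDefinition = {
  containerDefinitions: [
    {
      name: 'Mongo',
      // external dir on the instance -> Mongo's data dir in the container
      mountPoints: [{ sourceVolume: 'mongo-data', containerPath: '/mongo/db' }],
    },
    {
      name: 'api',
      // link: maps the database container's name to an alias, which also
      // injects MONGOCONTAINER_* environment variables into this container
      links: ['Mongo:MongoContainer'],
    },
  ],
  volumes: [{ name: 'mongo-data', host: { sourcePath: '/usr/mongodb' } }],
};

// Inside the API container, the source code can then build its connection
// string from the injected variable (falling back to a local default):
const address = (process.env.MONGOCONTAINER_PORT || 'tcp://localhost:27017')
  .replace('tcp://', '');
const mongoUrl = `mongodb://${address}/ourdb`; // "ourdb" is a placeholder
console.log(mongoUrl);
```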

After the task is set up, we make the service for the cluster: the service, in this case, contains the only task that we made. With the service properly configured, it will start and restart the tasks automatically: the deployment should now be ready.

It's easy to understand why we spent so much time trying to make it work (given the amount of details and steps), but looking back, this modularity makes a lot of sense. The learning curve is very steep, but I am very impressed by how powerful this service is. I am very inclined to start using it for deploying my own projects.

by Henrique Salvadori Coelho at October 12, 2016 02:00 PM

Laily Ajellu

Time Limits - Get 'A' Certified

Time Limits Should be Adjustable

People need varying times to complete tasks.
Follow these guidelines to give enough time to your users, and make your Web App 'A' Certified.

UX examples of where it's needed:

  • Situation:
    Text is scrolling across the screen
    Add a pause button

  • Situation:
    User is taking an Online test
    A moderator should be able to extend the time to complete the test

  • Situation:
    User is using Online Banking
    If the user is inactive for a while, give the user 20 seconds to press any key to extend the session.

  • Situation:
    User is trying to buy a concert ticket online (with limited tickets)
    Warn the user 20 seconds before the time limit ends. Also let the user input all of their personal and banking information before the time limit starts. It would be unfair to allow time extensions in this case.

The Checklist - One of the below should be true:

  1. Turn off:
    The user can turn off the time limit before it even starts
  2. Adjust:
    The user can extend the time limit to at least ten times the default length
    (The factor of ten is based on clinical experience.)

    Essential Exception:
    Where the time limit is essential. Eg. granting double time on a test
    If it would invalidate the outcome to give all users the ability to change the time limit, a moderator must be able to change time for them (Eg. the teacher)
  3. Extend:
    If the time limit is ending, give the user 20 sec to do something simple (like pressing the space bar). The user should be able to extend the time limit at least 10 times.

Other Exceptions

  1. Real-time Exception:
    The time limit is a required part of the activity (e.g. an auction), and it would be unfair to give more time to some users and not others
  2. 20 Hour Exception:
    If the default time limit is longer than 20 hours, you don't have to have any of the above features.
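The "Extend" rule from the checklist can be sketched as pure logic, with no timers or UI. The 20-second warning window and the minimum of 10 extensions come from the guideline above; the doubling policy and the numbers in the usage example are arbitrary:

```javascript
// Sketch of the "Extend" guideline: warn 20 seconds before the limit ends,
// and allow at least 10 extensions. Pure logic only, so it's easy to check.
class SessionTimer {
  constructor(limitMs, maxExtensions = 10) {
    this.limitMs = limitMs;
    this.extensionsLeft = maxExtensions;
    this.warningWindowMs = 20000; // 20 s to press a key
  }
  // when to show the "press any key to extend" warning
  get warnAtMs() { return this.limitMs - this.warningWindowMs; }
  extend() {
    if (this.extensionsLeft === 0) return false;
    this.extensionsLeft -= 1;
    this.limitMs *= 2; // doubling is arbitrary; the policy is up to the app
    return true;
  }
}

const timer = new SessionTimer(60000);   // a 60 s limit, for example
console.log(timer.warnAtMs);             // 40000: warn 20 s before the end
console.log(timer.extend());             // true: 9 extensions remain
```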

Who this Benefits:

  • People who are reading/listening to content that is not in their first language
  • People with physical disabilities often need more time to react
  • People with low vision need more time to locate things on screen and to read
  • People with blindness using screen readers may need more time to understand screen layouts
  • People who watch sign language interpreters

References: Time Limits Required Behaviours

by Laily Ajellu ( at October 12, 2016 03:21 AM

October 11, 2016

Matt Welke

Continuing Work on the Demo

Today we continued work on getting our application hosted on Amazon Web Services (AWS). This ended up being a lot more challenging than we expected. It’s difficult to navigate the system when we’re restricted by needing to manually get permissions (Amazon’s way of limiting what some users can do) from the client to start or stop services, create instances, etc. It is good practice, of course, to be conservative with these permissions. We empathize with them wanting to do things this way.

Our main problems seem to be with Docker crashing when it begins to run on the AWS instance we created. We also sometimes can’t even get the instance to be registered with the container service. We will continue work on this tomorrow. If for some reason we can’t get the Docker setup working before our demo on Thursday, we will likely resort to just manually using instances without any container service managing them.

by Matt at October 11, 2016 09:32 PM

Mohamed Baig


A process for small teams for managing their git repositories.

by mbbaig at October 11, 2016 05:08 PM

October 08, 2016

Andrew Smith

On the reception and detection of pseudo-profound bullshit

I first thought this was a joke, but it isn’t. Someone actually ran a study about how well people can detect bullshit. And the result is a wonderfully-written paper (which I’ll copy here, because the original will probably get its URL changed).

I haven’t finished reading it yet (my attention span isn’t what it used to be) but I’m posting it here because it’s clearly an impressive piece of work and is so relevant today, when it seems that you bump into idiots professing the truth no matter where you turn your ears.

Thanks to the authors for getting it done! I’m sure they had to jump through many hoops to have their project approved.

by Andrew Smith at October 08, 2016 09:38 AM

October 07, 2016

Matt Welke

Preparing for the Demo

Today I put some finishing touches on the client code before looking into hosting the back end push API and the client code as we prepare for our Tuesday demo.

We unfortunately don’t have enough permissions on our access to the client’s Amazon Web Services account right now, so I couldn’t get it fully hosted. But I began to investigate this on my own, using my own AWS account’s free tier. I was able to set up an IAM user for myself and give myself the needed permissions (which ended up being quite a few… Docker containers on AWS need a lot of their services to work). I wasn’t quite able to get the ECS (Elastic Container Service) task working. We’ll look at it in the morning next work day to get it running. Then, once we get the proper permissions from the client, we can get it running on their account, knowing exactly what to do.

For the client side of things, we were able to access their CMS server and add some JavaScript that would run on all pages. It was just a development environment there, so we didn’t need to worry about breaking things for millions of visitors (phew…). That was pretty simple, so I’m betting AWS is going to be the trickier part of this.

My team mate worked on the code while I investigated deployment like I described above. He secured the connections with HTTPS and WSS (WebSocket Secure) instead of our old HTTP and WS.

by Matt at October 07, 2016 09:27 PM

October 06, 2016

Matt Welke

I See You, User

Today we worked on the client side/browser hook code. We paid close attention to making sure it wouldn’t block the browser. We’re preparing a ready version to link to the main system (or at least a dev version of it) completely cloud-hosted, so we wanted to code these best practices ahead of time to make sure what we create is good enough to use in production.

I specifically worked on creating client side event handlers that would log when a user begins to fill out a form, and also would log when that user submits the form. One thing we have to be careful about is treating each group of form inputs as its own entity. If they have more than one form on the page (perhaps an email sign up, and a comment area), we want to track when they start to fill these out and/or submit them separately. See the example below. We set the event handler to “click” for radio buttons and checkboxes, and “keydown” for text inputs, so that it’s able to track them all:

// Find all form elements
const forms = document.getElementsByTagName('form');

// For each of them...
for (let i = 0; i < forms.length; i++) {
    // Each set of inputs gets its own group of event handlers
    // (inputs for one form don't interfere with inputs of another form)
    // ...prepare an array of child input elements
    const children = forms[i].children;

    // filter for just inputs
    const inputs = [];
    for (let j = 0; j < children.length; j++) {
        const child = children[j];
        if (child.tagName.toLowerCase() === 'input') {
            inputs.push(child);
        }
    }

    // Listener only fires for the first form element changing
    let triggered = false;

    // For each of them, assign an event listener
    inputs.forEach(input => {
        switch (input.type) {

            case 'text':
                input.addEventListener('keydown', () => {
                    if (!triggered) {
                        triggered = true;

                        // log the data
                    }
                });
                break;

            case 'checkbox':
            case 'radio':
                input.addEventListener('click', () => {
                    if (!triggered) {
                        triggered = true;

                        // log the data
                    }
                });
                break;
        }
    });
}
by Matt at October 06, 2016 09:16 PM

Henrique Coelho

JavaScript and Non-blocking functions

One of the most interesting features of JavaScript must be its event-driven and asynchronous nature: operations can, but don't have to, block the next operation from being executed before the current one is done. For instance, the following snippet follows a very logical sequence:

console.log(1);
console.log(2);
console.log(3);
console.log(4);

// The output is: 1 2 3 4

However, we can make these functions execute in a different sequence by setting timeouts for them:

setTimeout(() => console.log(1), 75);
setTimeout(() => console.log(2), 0);
setTimeout(() => console.log(3), 50);
setTimeout(() => console.log(4), 25);

// The output is: 2 4 3 1

Why is this useful? Well, suppose we have an operation that is costly to perform but doesn't have a high priority. Normally, it would block the operations after it, even though they are more important and don't depend on it:

// Not very important operation
for (let i = 0; i < 1000000000; i++);
console.log('Not very important operation is done!');

// Very important operations
console.log('Super important operation');
console.log('This operation is also very important');

/* Output:
Not very important operation is done!
Super important operation
This operation is also very important
*/

In this case, we could simply move the costly and unimportant operation to the end of the file (since it is not used by anything else, after all), but real life is not that easy: although we should prioritise the interaction with the user while leaving costly and unimportant operations for last, the interactions with the user are not predictable: we cannot create a logical sequence that covers all the cases. However, we can use the setTimeout function and set a 0 (zero) timeout for a procedure: the operation will be sent to the back of the queue of operations to perform. Like in this case:

// Not very important operation
setTimeout(() => {
    for (let i = 0; i < 1000000000; i++);
    console.log('Not very important operation is done!');
}, 0);

// Very important operations
console.log('Super important operation');
console.log('This operation is also very important');

/* Output:
Super important operation
This operation is also very important
Not very important operation is done!
*/

Having this in mind, I started experimenting to find the best combination to create a script that does the most vital (and cheap) operations as soon as possible, but leaves the ones that would affect the user experience for last.

First, I made a simple webpage like this one:

<!-- page.html -->
<script src="1.js"></script>
<div id="overall">
  ...around 100,000 auto-generated HTML elements here...
</div>
<script src="2.js"></script>

// 1.js
alert('Script 1 ' + document.getElementById('overall').childNodes.length);
for (let a = 0; a < 1000000000; a++);
alert('Script 1 done');

// 2.js
alert('Script 2 ' + document.getElementById('overall').childNodes.length);
for (let b = 0; b < 1000000000; b++);
alert('Script 2 done');

(I will change the file 1.js during this post, but 2.js and page.html will stay the same)

The idea is simple: a very heavy webpage with a script in the header, and a script at the end of the DOM; these scripts are just alerts saying how many elements are there in the DOM. This was the order of what happened while loading the page:

1- Script 1, 0 (alert in a blank page)
2- A few seconds of a blank page
3- Script 1 done
4- Dom is loaded
5- Script 2, 100002 (alert in a fully-loaded page)
6- A few seconds of loading, but with the page fully functional
7- Script 2 done

* the first childNodes.length actually gives an error because the element wasn't even defined yet, but the moral is: the DOM is not loaded

This is why it is recommended to put your script at the end of the page: it will not block your DOM from rendering. On top of that, if you are planning to do some DOM manipulation, you have to wait for it anyway, otherwise there won't be anything to manipulate (duh).

However, it has a drawback: your script will only be called after the DOM is already rendered. For our case, we want to know how much time it took for the DOM to load, so this is not an acceptable alternative. What we can do in this case is use an event to see when the page gets loaded, and then we execute the script:


alert('Doing some very fast and important work here...');
document.addEventListener("DOMContentLoaded", function () { 
  alert('Script 1 ' + document.getElementById('overall').childNodes.length); 
  for (let a = 0; a < 1000000000; a++); 
  alert('Script 1 done');
});

With this, the orders of executions become:

1- Doing some very fast and important work here
2- Dom is loaded
3- Script 2, 100002 (alert in a fully-loaded page)
4- A few seconds of loading, but with the page fully functional
5- Script 2 is done
6- Script 1, 100002 (alert in a fully-loaded page)
7- A few seconds of loading, but with the page fully functional
8- Script 1 done

Now another problem arrives: what if there are several costly, but less important functions inside that one? Say this is our 1.js now:


alert('Doing some very fast and important work here...');
document.addEventListener("DOMContentLoaded", function () {
  alert('Doing not very important operation...');
  for (let a = 0; a < 1000000000; a++);
  alert('Not very important operation done');

  alert('Super important operation');
  alert('This operation is also very important');
});

The order of operations would be:

1- Doing some very fast and important work here
2- Dom is loaded
3- Script 2, 100002 (alert in a fully-loaded page)
4- A few seconds of loading, but with the page fully functional
5- Script 2 done
6- Doing not very important operation...
7- Not very important operation done
8- Super important operation
9- This operation is also very important

Can we send the "Not very important operation" to the back of the queue again? Yes we can. By using the setTimeout function I described before:

alert('Doing some very fast and important work here...');
document.addEventListener("DOMContentLoaded", function () {
  setTimeout(() => {
    alert('Doing not very important operation...');
    for (let a = 0; a < 1000000000; a++);
    alert('Not very important operation done');
  }, 0);

  alert('Super important operation');
  alert('This operation is also very important');
});

This is the order of operations we would get:

1- Doing some very fast and important work here
2- Dom is loaded
3- Script 2, 100002 (alert in a fully-loaded page)
4- A few seconds of loading, but with the page fully functional
5- Script 2 done
6- Super important operation
7- This operation is also very important
8- Doing not very important operation...
9- Not very important operation done

By using some timeouts and some events, I'm confident we will be able to make a client module that executes at the right time: without interfering with the user experience, but still doing the right operations at the right time.

by Henrique Salvadori Coelho at October 06, 2016 02:00 PM

Laily Ajellu

Accessible Websites are Like Essays - A Memory Aid

Creating accessible pages is like writing an essay: you must have all the necessary organizational structures.

Both Must Have:

  1. Main headings
  2. Sub-headings
  3. Text for all explanations of images and meaningful color usage in your UI (the text can be screen-reader only, or available both to screen readers and visually)
  4. Labels positioned to maximize predictability of relationships
  5. Page numbers for pdf documents
References: Content Structure Separation, Programmatic Determination

by Laily Ajellu ( at October 06, 2016 02:09 AM