Planet CDOT

September 26, 2016

Henrique Coelho

Cookies, Third-Party Cookies, and Local/Session Storage

In this post I will make a brief introduction to cookies, but more importantly, I want to talk about third-party cookies: What are they? Where do they live? How are they born? What do they eat? And what are the alternatives?

Due to the nature of HTTP (it is based on requests and responses), we don’t really have a good way to store sessions (a fixed, persistent memory between visits to a web page). This is solved by using cookies: the website stores a small piece of text on the client’s computer, and this cookie can be accessed again by the same website. This solves the problem of having to ask for the client’s username and password on every single page, for instance, or of storing information about their “shopping cart” (when not using databases). There is also a security feature: a website cannot access cookies that it didn’t create; in other words, a cookie is only available to the domain that created it.
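On the client side, cookies live in document.cookie, which exposes every cookie for the current domain as a single "name=value; name2=value2" string. A minimal sketch of reading it (the parseCookies helper is hypothetical, not a browser API):

```javascript
// Hypothetical helper: parse a document.cookie-style string ("a=1; b=2")
// into an object. In the browser you would call parseCookies(document.cookie).
function parseCookies(cookieString) {
  var jar = {};
  cookieString.split("; ").forEach(function (pair) {
    if (!pair) return; // document.cookie is "" when no cookies are set
    var eq = pair.indexOf("=");
    jar[pair.slice(0, eq)] = decodeURIComponent(pair.slice(eq + 1));
  });
  return jar;
}

parseCookies("user=henrique; theme=dark"); // -> { user: "henrique", theme: "dark" }
```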

According to scientists, this is what a cookie looks like

HTTP cookies first appeared in Mosaic Netscape (the first version of the Netscape browser), followed by Internet Explorer 2. They can be set either on the server side (with PHP, for instance) or on the client side, using JavaScript. There are several types of cookies:

  • Session cookies are deleted when the browser is closed
  • Persistent cookies are not deleted when the browser is closed, but expire after a specific time
  • Secure cookies can only be transmitted over HTTPS
  • HttpOnly cookies cannot be accessed on the client side (JavaScript)
  • SameSite cookies can only be sent when originating from the same domain as the target domain
  • Supercookies are cookies with a “top-level” domain, such as .com, and are accessible to all websites within that domain
  • Zombie cookies get automatically recreated after being deleted
  • Third-party cookies (I will talk about them now)
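Most of these variations are just attributes on the Set-Cookie response header the server sends. As an illustration — the buildSetCookie helper below is a hypothetical sketch, though the attribute names (Expires, Secure, HttpOnly, SameSite) are the real ones:

```javascript
// Hypothetical server-side helper that builds a Set-Cookie header value.
function buildSetCookie(name, value, opts) {
  opts = opts || {};
  var parts = [name + "=" + encodeURIComponent(value)];
  if (opts.expires) parts.push("Expires=" + opts.expires.toUTCString()); // persistent cookie
  if (opts.secure) parts.push("Secure");     // only transmitted over HTTPS
  if (opts.httpOnly) parts.push("HttpOnly"); // invisible to client-side JavaScript
  if (opts.sameSite) parts.push("SameSite=" + opts.sameSite);
  return parts.join("; ");
}

buildSetCookie("session", "abc", { secure: true, httpOnly: true, sameSite: "Strict" });
// -> "session=abc; Secure; HttpOnly; SameSite=Strict"
```

Omitting Expires (and Max-Age) is what makes a session cookie: the browser simply drops it when it closes.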

Third-party cookies

Cookies are a lot more powerful than they seem, despite the obvious limitation of only being available to their own domain. Let’s suppose I have a website and I decide to put some ads on other websites. Instead of simply offering these websites a static image with my advertisement, I could pass them a PHP file that generates an image:

<a href="..."><img src="" /></a>

This PHP file would just generate an image dynamically, but it could do more: it could send JavaScript to the clients and set cookies on the websites where I advertised. When someone accesses one of those websites, my script could detect the website address using JavaScript (window.location) and record this information in a cookie. When the user navigates to other websites with my ads, my script would repeat the process. This information would be accessible to me: I would know exactly which websites the user visited.
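What the ad script does with the cookie is simple bookkeeping. A sketch of the idea, with the cookie reading and writing stubbed out (recordVisit and readCookie are hypothetical names, and the .example domains are placeholders):

```javascript
// Hypothetical sketch: append the current site to a comma-separated list that
// would be kept in a cookie on the advertiser's domain.
function recordVisit(cookieValue, site) {
  var visited = cookieValue ? cookieValue.split(",") : [];
  if (visited.indexOf(site) === -1) {
    visited.push(site);
  }
  return visited.join(",");
}

// In the browser, the ad script would do something like:
//   document.cookie = "visited=" +
//     recordVisit(readCookie("visited"), window.location.hostname);
recordVisit("", "news.example");             // -> "news.example"
recordVisit("news.example", "blog.example"); // -> "news.example,blog.example"
```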

Courtesy of Wikipedia


Problems with privacy and blocking

Needless to say, people who are slightly concerned with privacy do not like cookies; especially for non-techie people, this is a very convenient witch to hunt – I am surprised that magazines and newspapers are not abusing it. Most modern web browsers can block third-party cookies, which is a concern if you are planning a service that relies entirely on this feature.

It’s not easy to find statistics about cookie usage, but I got one from Gibson Research Corporation:

Browser usage
Cookie configuration by browser, where FP = first-party cookie and TP = third-party cookie.

It seems that third-party cookies are disabled in Safari by default, while other web browsers are also getting stricter about them. Despite still being used, this practice seems to be reaching a dead end. On top of that, cookies are also unable to track users across different devices.


Alternative: Local/Session Storage

Apparently, cookies are dying. It may be a little too early to say this, but we don’t want to create something that will be obsolete in 5 years, so it is a good idea to plan ahead. What is the future, then?

Probably the most promising tool is Local and Session Storage, which also seems to be supported in the newest browsers:

Compatibility for Local and Session storage

The way Local and Session Storage work is very simple: they behave as a little database in the browser, storing key-value pairs of plain text. While Local Storage is persistent (it does not get deleted), Session Storage lasts only while the browser is open (it is deleted when the browser is closed, but not when the page is refreshed). They are great for storing persistent and non-sensitive data, but they are not accessible from the server: the storage is only accessible from the client side – if the server must have access to it, the data must be sent manually.
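The API itself is tiny: setItem, getItem and removeItem on window.localStorage or window.sessionStorage. Since those objects only exist in the browser, the sketch below includes a minimal in-memory stand-in so the example is self-contained:

```javascript
// Minimal in-memory stand-in for the browser's Storage object (illustration only).
function makeStorage() {
  var data = {};
  return {
    setItem: function (key, value) { data[key] = String(value); }, // stored as plain text
    getItem: function (key) { return key in data ? data[key] : null; },
    removeItem: function (key) { delete data[key]; }
  };
}

var localStorage = makeStorage(); // in the browser, window.localStorage already exists

localStorage.setItem("user", "henrique");
localStorage.setItem("cart", JSON.stringify([1, 2, 3])); // objects must be serialized

localStorage.getItem("user");             // -> "henrique"
JSON.parse(localStorage.getItem("cart")); // -> [1, 2, 3]
```

sessionStorage exposes exactly the same interface; only the lifetime differs.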

Using Local Storage, it is possible to build a system similar to third-party cookies, with methods similar to the ones I explained. Here is an article on how to do this: Cross Domain Localstorage.



by henrique coelho at September 26, 2016 02:38 PM

September 25, 2016

Matt Welke

MongoDB Pretty Dynamite After All

Alright, that’s it for the DynamoDB puns, I swear.🙂

I feel like on Friday I finally had a breakthrough for our project while investigating technologies to use for the project. My team mate built a prototype of our back end and the front end we will need to create in a few months, with Node.js and DynamoDB for the back end, so that we could demo it after the weekend. Meanwhile, my job for the day was to take one more stab at finding an open source alternative to DynamoDB. I looked further into using MongoDB.

MongoDB Atlas is limited to 10 TB if you use it as a service. This isn’t good enough, but luckily, you can use Amazon Elastic Compute Cloud (EC2) to provision your own instances to run your web apps, and this includes choosing instances that have up to 48 TB of cheap, magnetic storage per instance. Eureka! We were so obsessed with looking at X-as-a-Service that we forgot about the DIY options. We can still use Docker to make things easier for the client, reducing the maintenance and tech knowledge needed to migrate the system we build in the future. So this may be an amazing sweet spot: an EC2 instance running our Node.js back end in a Docker container, and another EC2 instance with tons of storage running MongoDB in a Docker container, to accept and store the data. The latter can scale to multiple EC2 instances all running MongoDB (sharded) to store more and more data as the data grows. And it’s all open source.🙂

After this discovery, I created a small prototype of a Node.js and MongoDB back end, running in two separate Docker containers locally, to get a sense of how it would fit together. Docker ended up being very intuitive. I think it’s going to be a popular tool over the next few years. My prototype worked. My team mate’s prototype worked. Mission accomplished!

by Matt at September 25, 2016 08:17 PM

Andrew Smith

DIY portable socket organizer

I wanted to build this for a long time. I hate looking through random boxes for a socket that’s 1mm larger or smaller than another socket which almost fits the nut I’m taking off :)


Part of the problem is that all the large socket sets you can buy have some sizes missing. Even the expensive socket sets. So it took me a while to assemble this set. It includes:

  • Most of the black short and deep sockets are from the original set, which I bought used (I was told it was an impact socket set but now I think that was a lie).
  • A couple of the short sockets are Mastercraft Maximum.
  • One of the short sockets and three of the deep sockets are from Princess Auto.
  • One of the deep sockets is from Amazon.

Crazy, yeah? I also have in here:

  • A nice short 3/8″ Ingersoll Rand air ratchet with a swivel adapter from Lowe’s. My IR impact wrench wouldn’t fit in here with everything else.
  • A wobble extension bar set from Canadian Tire.
  • A full adapter set from Princess Auto.
  • And three ratchets and a screwdriver bit set from I don’t even remember where.

The box I found at the curb – someone threw it out. It was lined with foam with indentations for what looked like ceremonial spear heads – that’s my best bet. Something fancy or another that broke or was no longer loved. But it worked great for this purpose.

The holes in the polystyrene (not anything good, just from packing material) were cut with a fret saw. The grooves on the wood I cut with a mitre saw because I didn’t yet have a table saw at that time. The dado on the big piece of wood was also made with the mitre saw (mine has a depth adjustment).

Everything except the big wood pieces is held together with hot glue. I was originally planning to make this modular so that I can remove and insert pieces as time goes by. I had to give up on that idea – it was too complicated, but I figured hot glue will be good enough anyway.

I just finished this socket organizer, but it seems like it’s going to hold. The flimsiest part is the polystyrene, but I wasn’t going to spend $20 on a sheet of better quality stuff from an arts supplier, and I didn’t have enough drill bits to match every socket size in wood. I need a bit of friction so they don’t fall out.

Looks great, works great. Weighs a lot but it’s a good solid box that I don’t intend to bang around. Now I just have to figure out a good, portable way to organize my wrench set :)

by Andrew Smith at September 25, 2016 05:43 PM

September 24, 2016

Eric Brauer

First Post!

Hello, this is my first post for this work blog. I’ve been at CDOT for nine months, but only now am I attempting that feat that some call: ‘blogging on a consistent basis.’ Many have tried to reach that summit, many have failed. (Ha!)

In the coming weeks, I hope to have some interesting updates on some of our work at CDOT. I’ll confess to sometimes being at a loss for what to write or talk about when pressed for content. The fantastic thing about working at CDOT is that I’m constantly encountering new concepts and technologies. Yet I’m all too aware that for many among the prospective audience, these concepts are familiar. I’ll try not to be tedious..!

As for my role at CDOT: I’m coming from a computer engineering program, which basically means that I end up being the go-to guy whenever a computer is being asked to play nice with external bursts of electrical current. We always hope that these are useful bursts of electrical current. Hopefully they are coming from a sensor or a switch, or going to a light or external device.

I expect to be back in a week’s time with something interesting to discuss.

by ebraublog at September 24, 2016 02:50 PM

September 23, 2016

Matt Welke

MongoDB kinda Dynamite

Well, where to start… This was a busy few days!

To start with, I should apologize for the previous blog post bashing DynamoDB before Amazon gets sad. It turns out my team mate and I were completely wrong about its abilities. We jumped to the conclusion that it could not support queries without specifying the primary key. It turns out it can. And it does this through the use of indexes that you manually specify (which you can do after the table has been created too). These indexes aren’t quite like indexes in relational databases though. They’re hashes. And given DynamoDB’s price competitiveness, we’re pretty happy looking at it as an option. Tl;dr we can use DynamoDB basically just as feasibly as any other NoSQL database. For the non-tl;dr, see my team mate’s blog post where he gets into the gritty details about DynamoDB and hash indexes.

Now on to the rest! We’re still not ready to start building because we’re still stuck at the stage where we decide on technology. Technologically, DynamoDB is good enough for us. However, I don’t work at the Seneca College Centre for Development of Proprietary Technology. We breathe open source. If we can find an open source alternative to DynamoDB that the client is comfortable with, we can avoid coupling them too tightly to a proprietary technology. Freedom is nice.

So open source is good, NoSQL is necessary. All aboard the hype train, next stop MongoDB? I’ve known about MongoDB for years, and to my knowledge, it’s past its hyped-up, evangelized stage – and if it’s still around, it must be good. I researched it. Turns out it’s massively scalable, provided you have the metal for it; or, if you want to go on the cloud, the database’s creator even offers MongoDB Atlas, a Database-as-a-Service (DBaaS). Atlas sounds like what we need, but it had better be scalable. That’s the problem we keep running into with these cloud services. They’re convenient but limited. The main reason we need NoSQL at this point is that all of the relational database services on Amazon Web Services have a 6 TB limit, and we know we’ll need much more space than that if the client wants to run our creation for years to come. From what I gathered reading Atlas’s whitepaper and website pricing info, we should be able to get at least 10 TB from their cloud service. Better… But I sent them an email requesting more information and guidance before we take it too seriously.

Now here’s where the drama starts again. I researched how accurate MongoDB was. That is, is it just as strong as a traditional relational database? Is it ACID-compliant? Will it explode?



Our database for the project, maybe.

Yes. MongoDB’s criticism after its initial hype was warranted after all. It is absolutely garbage for relational data, *if* you need that data to be accurate. It isn’t ACID-compliant on the transactional level, so if you have a power outage, say goodbye to the usefulness of your data. Now for our project, this may be okay, because we don’t need the data to be super accurate. If things get disjointed after being denormalized (as anything must be to fit into a NoSQL database), it just slightly reduces the amount of useful analytics data we would have to work with. And having 99.99% of the analytics data we mined available to us is still completely fine. However, I would definitely not use MongoDB in the real world for anything involving important information, especially e-commerce etc. If this paragraph went over your head or bored you, you’d probably enjoy reading the use case blog post I read to discover this. In it, American developer Sarah Mei describes how using MongoDB during the launch of Diaspora almost destroyed the project, and why they ultimately had to retreat back to relational databases.

So what does this mean? We’re closer, but we need to triple check the safety of using MongoDB for this project (and all NoSQL databases like DynamoDB for that matter!) before finally getting started building.


by Matt at September 23, 2016 02:37 AM

September 21, 2016

Henrique Coelho

Getting acquainted with DynamoDB

In my previous post about DynamoDB I explained some of its limitations and quirks, but I feel I focused too much on the negative side, so this time I will focus on why it is actually a good database and how to avoid its limitations.

DynamoDB is not designed to be as flexible as common SQL databases when it comes to making joins, selecting anything you want, and making arbitrary indexes: it is designed to handle big data (hundreds of gigabytes or terabytes), so it is natural that operations like “select all the movies from 1993, 1995 and 1998” would be discouraged – the operation would just be too costly. You can still do them, but it would involve scanning the whole database and filtering the results. With this in mind, DynamoDB appears to be useful only if you are working with big data; if not, you’ll probably be better off with a more usual database.

So, what is the deal with queries and secondary indexes, exactly (I mentioned them in my previous post)? To explain this, it is good to understand how indexes work in DynamoDB – that way we can understand why they are so important.

Suppose we have this table, where id is a primary key:

id (PK) title year category
 1  The Godfather  1972 1
 2  GoldenEye  1995 1
 3  Pirates of Silicon Valley  1999 2
 4  The Imitation Game  2014 2

In this case, we could search for “movies whose id is 3”, but not movies whose id is less than 3, greater than 3, different from 3, or between 1 and 3 – this is because the primary key must always be a hash. Although it is a number, the way this ID gets indexed (it is hashed, not stored in sorted order) makes it impossible to search by criteria that demand sorting; it can only be matched against an exact value.

Now, I already explained that in order to make queries, we always need to use the primary key. This is true, but not entirely: you can create “secondary primary keys” (global secondary indexes) and search based on them – and secondary indexes do not have to be unique. I will explain what “local secondary indexes” are later; for now I’ll focus on global indexes. We could make a global secondary index on the category of the movie:

id (PK) title year category (GSIH)
 1  The Godfather  1972 1
 2  GoldenEye  1995 1
 3  Pirates of Silicon Valley  1999 2
 4  The Imitation Game  2014 2

Where GSIH = Global secondary index, hash. Indexes need a name, so I will call this one “CategoryIndex”.

Now that we have a secondary index, we can use it to make queries:

TableName : "Movies",
IndexName: "CategoryIndex",
ProjectionExpression:"id, title, year",
KeyConditionExpression: "#cat = :v",
ExpressionAttributeNames: { "#cat": "category" },
ExpressionAttributeValues: { ":v": 2 }

This will get us the movies Pirates of Silicon Valley and The Imitation Game (the two in category 2). The attribute “category”, however, is still a hash, and this means we can only search it with exact values.

Not very intuitively, indexes can actually have 2 fields, the second one being optional: a hash (in the examples I showed, id and category), and a range. Ranges are stored sorted, meaning that we can perform searches with operators such as larger than, smaller than, between, etc – but, you still need to use the hash in the query. For instance, if we wanted to get the movies from category 2 from 1995 to 2005, we could turn the attribute year into a range, belonging to the index CategoryIndex:

id (PK) title year (GSIR) category (GSIH)
 1  The Godfather  1972 1
 2  GoldenEye  1995 1
 3  Pirates of Silicon Valley  1999 2
 4  The Imitation Game  2014 2

Where GSIH = Global secondary index hash, and GSIR = Global secondary index range.

TableName : "Movies",
IndexName: "CategoryIndex",
ProjectionExpression:"id, title, year",
KeyConditionExpression: "#cat = :v and #ye between :y and :z",
ExpressionAttributeNames: { "#cat": "category", "#ye": "year" },
ExpressionAttributeValues: { ":v": 2, ":y": 1995, ":z": 2005 }

This would give us the movie Pirates of Silicon Valley. Global secondary indexes can be created and deleted whenever you want; you can have up to 5 of them per table.

Local secondary indexes are almost the same; the difference is that instead of creating a hash and an optional range, the primary key is the hash, meaning it will have to appear in the query. They are also used to partition your table, which means they cannot be changed after the table is created.

But after all, why do we still need to divide our data into smaller categories to search? Well, because if you are working with big data, you should divide your data into smaller pieces somehow, otherwise it will just be too hard to search. How can you divide it? Just find something in common that separates the data nicely into homogeneous groups.

Remember my other example, when I only wanted to search movies from 1992 to 1999, but without scanning the whole table? How could we do this? Let’s think a bit about this example: why would you query this? If you are querying this because your website offers a list of “all movies released from the year X to Y in the Z decade”, you could make use of this common ground, create an attribute for it, and index it like this (I’ll call it DecadeIndex):

id (PK) title decade (GSIH) year (GSIR) category
 1  The Godfather  70  1972 1
 2  GoldenEye  90  1995 1
 3  Pirates of Silicon Valley  90  1999 2
 4  The Imitation Game  00  2014 2

Now look: we have a hash index (decade) that covers all the possible results that we want, and we also have a range field (year). We can search it with:

TableName : "Movies",
IndexName: "DecadeIndex",
ProjectionExpression:"id, title, year",
KeyConditionExpression: "#dec = :v and #ye between :y and :z",
ExpressionAttributeNames: { "#dec": "decade", "#ye": "year" },
ExpressionAttributeValues: { ":v": 90, ":y": 1992, ":z": 1999 }

If I didn’t type anything wrong, we would get the movies GoldenEye and Pirates of Silicon Valley.

If you are like me, you are probably thinking: “Ok, but what if I wanted movies from 1992 to 2005? That spans more than one decade”. This is also simple to solve: if this is a possibility, you could have another index with the same functionality, or simply query once per decade – it seems costly, but since the entries are indexed, the operation will still be far faster than doing a scan (and probably faster than the same operation in an SQL database).
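As a sketch of that last idea (assuming the decade attribute is stored as a two-digit string, as in the table above; decadesBetween is a hypothetical helper), you could compute which decades a year range touches and then issue one DecadeIndex query per decade:

```javascript
// Compute the two-digit "decade" hash values a year range spans,
// e.g. 1992-2005 touches the 90s and the 00s.
function decadesBetween(startYear, endYear) {
  var decades = [];
  for (var d = Math.floor(startYear / 10) * 10; d <= Math.floor(endYear / 10) * 10; d += 10) {
    var tag = String(d % 100);
    decades.push(tag.length < 2 ? "0" + tag : tag);
  }
  return decades;
}

decadesBetween(1992, 2005); // -> ["90", "00"]
// Each value would then be used as the decade hash (:v) in one query against DecadeIndex.
```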

In conclusion, DynamoDB seems to be extremely efficient for operations in tables with enormous amounts of data, but it comes with a price: you must plan the structure of your database well and create indexes wisely, having in mind what searches you will be doing.

by henrique coelho at September 21, 2016 09:55 PM

Why DynamoDB is awesome and why it is not

I made a new post about DynamoDB and how to solve its limitations: Getting acquainted with DynamoDB


We still don’t know for sure which technologies we are going to be using for our API, including the technologies for the databases; the two main technologies we are focusing on right now are DynamoDB and PostgreSQL. Most developers are already familiar with PostgreSQL: it is an open-source, free SQL database, similar to MySQL; DynamoDB, however, is a NoSQL, proprietary database that belongs to Amazon.

We did our research and tried both; these are our impressions and the main differences:

                     DynamoDB                   PostgreSQL
Structure            NoSQL                      SQL
Documentation        Misleading and confusing   Good
Price on AWS [1]     Cheap and flexible         Fair, but not flexible
Syntax               Has its own syntax         SQL
Easiness to use [2]  Fair                       Very easy
Scalability on AWS   Excellent                  Good
Performance on AWS   Excellent                  Good
  • [1] AWS = Amazon Web Services.
  • [2] May be misleading, since we come from an SQL perspective, so there is not much to learn in order to use PostgreSQL. In fairness, DynamoDB does a good job of being intuitive.

It seems that DynamoDB is a fair competitor; however, it may have a dealbreaker: the way it handles indexes and queries. To explain this, let’s suppose we have the following table called Movies (NoSQL doesn’t have tables, I know – can you stop being pedantic, pls? Besides, “table” is actually the correct name for this structure in DynamoDB):

id (PK) title year category
 1  The Godfather  1972 1
 2  GoldenEye  1995 1
 3  Pirates of Silicon Valley  1999 1
 4  The Imitation Game  2014 1

Just a disclaimer before I start explaining the differences: the documentation for DynamoDB is very obscure, so it is possible that I am missing some pieces of information or simply misunderstood them. So, in DynamoDB, your primary key must be a hash field – it is unique, but cannot be searched as a range (you can’t search for “id between 1 and 5”, for instance). You can, however, specify another column to be a range (year could be a range). For this example, there is only one index: id.

In order to select all the data from the table, this is how we could do in SQL:

SELECT id, title, year FROM Movies;

This is how we could do it with DynamoDB (it may have an error somewhere – I can’t test it now, just bear with me, ok?):

TableName: "Movies",
ProjectionExpression: "id, title, year"

Nothing incredible, right? ProjectionExpression lists the fields we are looking for. This kind of operation is called a scan – it scans the whole table and returns all the results. So how would we search for a specific ID, say, ID 3? In SQL:

SELECT id, title, year FROM Movies WHERE id=3;

In DynamoDB:

TableName : "Movies",
ProjectionExpression:"#k, title, year",
KeyConditionExpression: "#k = :v",
ExpressionAttributeNames: { "#k": "id" },
ExpressionAttributeValues: { ":v": 3 }

Weird, right? But the idea is actually simple: #k and :v are placeholders – #k is ‘id’ and :v is ‘3’, just like variables and their values. KeyConditionExpression is the condition, ExpressionAttributeNames is the “map” for the keys, and ExpressionAttributeValues is the “map” for the values.
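For context, these fragments are the params object you hand to the SDK. With the AWS SDK for JavaScript, wiring it up looks roughly like this (a sketch: buildIdQuery is my own hypothetical helper name, and the commented-out call is not run against a real table):

```javascript
// Build the params object for the "find movie with id 3" query.
function buildIdQuery(id) {
  return {
    TableName: "Movies",
    ProjectionExpression: "#k, title",
    KeyConditionExpression: "#k = :v",
    ExpressionAttributeNames: { "#k": "id" },
    ExpressionAttributeValues: { ":v": id }
  };
}

// With the AWS SDK for JavaScript (v2) it would be executed like:
//   var AWS = require("aws-sdk");
//   var docClient = new AWS.DynamoDB.DocumentClient();
//   docClient.query(buildIdQuery(3), function (err, data) {
//     if (!err) console.log(data.Items);
//   });
```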

So far so good, but here is the catch: when you create a table in DynamoDB, you have to specify a primary key, which is also the index, and you cannot make a query that doesn’t use that key in the condition. What I mean is: say you want to “find the movies made in the 90s”, putting the condition in the query… Well, in principle, you can’t, simple as that – because you are not using the primary key in the condition. There are, however, workarounds: doing a scan and filtering, and using secondary indexes.

The first alternative is doing a scan in the database (getting all the data) and then filtering it like this:

TableName: "Movies",
ProjectionExpression: "id, title, year",
FilterExpression: "#yr between :start_yr and :end_yr",
ExpressionAttributeNames: { "#yr": "year", },
ExpressionAttributeValues: { ":start_yr": 1990, ":end_yr": 1999 }

Seems simple, but it has a big drawback: you will actually pull ALL the data from the database and then filter it – this is often unacceptable if you have large quantities of data.
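It is actually worse than one big request: a single scan call returns at most 1 MB of data, so pulling everything means looping with LastEvaluatedKey (passed back as ExclusiveStartKey) until it is absent. A sketch of that loop, with a stand-in for docClient.scan so the example is self-contained:

```javascript
// Drain a paginated scan. `scanPage` stands in for a (synchronous) call to
// docClient.scan and must return { Items: [...], LastEvaluatedKey: ... }.
function scanAll(scanPage) {
  var items = [];
  var startKey;
  do {
    var page = scanPage(startKey); // real code would pass ExclusiveStartKey: startKey
    items = items.concat(page.Items);
    startKey = page.LastEvaluatedKey; // undefined on the last page
  } while (startKey);
  return items;
}

// Fake two-page scan for illustration:
function fakeScan(startKey) {
  return startKey
    ? { Items: ["The Imitation Game"] }
    : { Items: ["The Godfather", "GoldenEye"], LastEvaluatedKey: 2 };
}
scanAll(fakeScan); // -> ["The Godfather", "GoldenEye", "The Imitation Game"]
```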

The other alternative is using what they call “secondary indexes”, and this is where things get complicated: secondary indexes can be local or global. Local indexes can be queried, but are still dependent on the original hash key (the primary key); global indexes can be queried, HOWEVER, they must rely on another hash, one that is not the primary key. If we made a global secondary index for year that used the category as the hash (category is the hash and year is the range), we could query for the “movies made between 1990 and 1999 which belong to category 1” like this:

TableName: "Movies",
IndexName: "CategoryYearIndex",
ProjectionExpression: "id, title, year",
KeyConditionExpression: "category = :cat and year between :ys and :ye",
ExpressionAttributeValues: { ":ys": 1990, ":ye": 1999, ":cat": 1 }

Which is reasonable, HOWEVER, global secondary indexes also have problems: you are still tied to a hash, and you have to pay to use them.

Alright… But that does not really answer the question: how can I “select movies from 1990 to 1999” without using another hash? Well, as far as we understood from the documentation, the only way around this is scanning your whole table and filtering it. Not ideal. HOWEVER, local secondary indexes kind of solve this: I read in another blog post that scans which filter on secondary indexes are still very performant and won’t be as costly as fetching all the data. HOWEVER, local secondary indexes can only be made at the moment the table is created: you cannot change or add them later, which is not exactly scalable.

It seems that DynamoDB is really powerful and easy to use if you want to make simple, fast queries to only retrieve values without too many conditions – it will really shine in these situations. But if your system requires more unusual queries and your indexes may change over time, I don’t think DynamoDB is a good choice; you can work around these limitations, but I feel like you will just be swimming against the current.

by henrique coelho at September 21, 2016 12:12 AM

September 20, 2016

Matt Welke

DynamoDB Not So Dynamite

Today we checked out the wonders of NoSQL, specifically with DynamoDB. As we get closer to finalizing our plan on what technology to use for the project, we wanted to investigate the database the client likes to use for their setup right now. That’s DynamoDB, a NoSQL database-as-a-service that’s apparently quick to set up, easy to use, and infinitely scalable. Its scalability comes from the fact that Amazon abstracts away all the details of maintaining a robust database to store tons of data. We would simply throw data into it.

The problem is… throwing data in and pulling data out seems to be all DynamoDB is capable of. It excels at a few things: its speed and scalability are amazing. But due to its nature, you cannot query a table based on attributes alone without iterating over every row in the table and checking it. You need the primary key to do a query. This is useless for data analysis. We need something we can ask “what are the articles published between such and such date that fit into such and such category”, etc. We need something more capable than DynamoDB for the kind of work we’re going to be doing. Luckily, Amazon does offer another automatically-managed database service… RDS. We need to worry about instances and how powerful they should be, and it’s billed by the hour, not by the millisecond, but it’s able to do what we need. It allows databases such as PostgreSQL to run on it, which we strongly believe at this point we will need.

The rest of the week will be used to prepare our plan to submit to the client on Thursday and hopefully we’ll get to start building at that point! I’ve been anxious to start building. So far, it feels like we haven’t done anything measurable or useful. But I suppose when it comes to programming, it’s best to measure thrice and cut once.

by Matt at September 20, 2016 10:54 PM

Matthew Marangoni

Localization with React - Supporting a Multitude of Languages

In the HTML5 client, we've been working to implement an open-source method for translating text throughout the application (known as localization). What seems to be the most popular and compatible method for us is the yahoo/react-intl package. One of the issues with this package is that it is not optimal for BigBlueButton's application: each language must be statically imported and shipped with the application every time it loads. In most cases this is not an issue, since the majority of websites and web apps only support a handful of languages. For the BigBlueButton project, however, anywhere from 50-100 languages will (eventually) need to be supported. This means every user would potentially be loading all 100 languages every single time they want to use the HTML5 client, when in reality they will only need two or three languages at most.

Right now I'm trying to determine if there is a configuration file in this package that we can use to do a few things:
  1. Detect the browser language and region (i.e. pt-BR) for Portuguese as spoken in Brazil
  2. If the language-and-region combination is not found, try just the language (i.e. pt) – the region is optional, not required.
  3. Set the application's default language to en-US.
After looking over the issues and pull requests on the react-intl GitHub page, it doesn't appear that anything already in progress will suit our needs, so we may be forced to come up with our own custom solution that works with this package, or find a different, more suitable one.
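The fallback chain itself is easy to express. A sketch of the logic we want, independent of react-intl (resolveLocale is a hypothetical helper, not part of the package):

```javascript
// Resolve a requested locale against a list of supported ones:
// try "pt-BR", then bare "pt", then the application default.
function resolveLocale(requested, supported, defaultLocale) {
  if (supported.indexOf(requested) !== -1) return requested;
  var language = requested.split("-")[0]; // strip the optional region
  if (supported.indexOf(language) !== -1) return language;
  return defaultLocale;
}

// In the browser, `requested` would come from navigator.language.
resolveLocale("pt-BR", ["en-US", "pt"], "en-US"); // -> "pt"
resolveLocale("fr-CA", ["en-US", "pt"], "en-US"); // -> "en-US"
```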

Another small issue with this package is that every FormattedMessage requires a default message attribute identical to the message found in the en.json file. The problem that arises is that when a message needs to be altered, it must be changed in two places – anywhere that text requiring translation can exist. The other issue is that when a default message is left out, screen readers will instead read the id attribute of the formatted message, which has no value or meaning to the user. Currently these are the two major issues we've encountered with the react-intl package, and we are in the process of seeking a solution or alternative.

by Matthew Marangoni at September 20, 2016 08:43 PM

September 19, 2016

Jaeeun Cho

Localization and Internationalization.

As the world gets more connected and the number of online users grows rapidly, websites and applications need to be presented appropriately in different nations, especially for international companies, because each country uses different languages and different formats for dates, times, currencies, and more. Unfortunately, it is easy to overlook users from other countries. To address this, we need to understand 'Localization' and 'Internationalization'.


Localization refers to the adaptation of an existing website to the local language and culture of the place where it will be used. It is sometimes written as 'l10n', because there are ten letters between the 'l' and the 'n'. A successfully localized website is one that feels natural to its users.


Internationalization means designing software so that it can support the languages and formats of people from different countries and cultures. It is often written as 'i18n', because there are eighteen letters between the 'i' and the 'n'. In websites and applications, languages and formats are typically localized according to the user's locale.

Internationalization might involve the following:

  1. Removing barriers to localization or international deployment in design and development. This entails enabling the use of Unicode (UTF-8), or at least handling character encodings properly.
  2. Adding markup to support bidirectional text or multiple languages, or adding CSS support for vertical text.
  3. Supporting local, regional, language, and culturally related preferences in code. For example, each country has different formats for dates, times, calendars, and numbers, and even for names and addresses.


A "locale" is a collection of language-related user preference information, represented as a list of values. The locales argument must be either a string holding a BCP 47 language tag or an array of such tags. If the locales argument is omitted, the default locale is used. The language tags defined by BCP 47 (languages, scripts, countries, and more) can be found in the IANA Language Subtag Registry.
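The ECMAScript Internationalization API exposes this directly through the Intl constructors; for example (the locale tags here are just illustrations):

```javascript
// Intl constructors accept a BCP 47 tag (or an array of tags, tried in order).
const enNumber = new Intl.NumberFormat('en-US');
console.log(enNumber.format(1234567.89)); // "1,234,567.89"

// With an array, the first locale the runtime supports wins;
// with no argument, the environment's default locale is used.
const date = new Intl.DateTimeFormat(['pt-BR', 'en-US']);
console.log(date.resolvedOptions().locale);
```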

by Jaeeun(Anna) Cho at September 19, 2016 11:35 PM

Matt Welke

Lambda vs. Docker

Today was heavy on research… We want to make sure we choose the right technology for our back end. Our client wants to use Amazon Lambda, which is the most popular of the new “serverless” architecture services, and we at first wanted to just make an application and push it to a cloud service. The solution is probably going to be a compromise in between.

We wanted to just create an application because that’s what we’re comfortable with. We’re students. We tinker with new technologies and quickly make prototypes and host them. We don’t necessarily care about supporting them. In order to be effective developers in the real world, we need to be willing to see things from the client’s perspective. They want to use “serverless” architectures. This is a bleeding edge new type of cloud service where you don’t have to think of the server you’re going to run your code on because they provision the servers for you. That’s what they mean by “serverless”. The server is there, it’s just abstract now. (I think of it like encapsulation with object oriented programming, the low level code actually doing things is there, it’s just abstracted by my methods). With the serverless architecture, you don’t even have to think of the application you’re going to create, just the code inside it, because it will actually take raw code and just run it on applications that it creates, on servers that it provisions. On paper, it sounds wonderful. But we are definitely running into some pain points as we try to adapt to this new style.

Most of the problems we've encountered with the serverless architecture seem to come from a general unfamiliarity with the Amazon Web Services online interface. But many of the issues probably also stem from the relative immaturity of serverless architectures compared to more traditional methods of doing web development. This sentiment was echoed by a blogger I encountered who described Amazon Lambda as being “not ready” just seven months ago. When we tested it out, we ran into tons of digital red tape, so to speak. We couldn't just pass parameters to code and do stuff with them; we had to write middleware and config files just to get the parameters into our code.

An alternative we’re considering is using Docker. It has the advantages of an actual application (we get our parameters! yay!) but it also has that abstract slightly serverless style that the client is going for. It’s supposedly independent of the server you’re going to run your Docker “images” on. The client should be able to take a Docker image we produced and easily get it running on any Docker-supporting cloud provider they wish. And that includes Amazon Web Services, where they currently have everything running. Docker itself has a learning curve that my team mate and I will need to get through to go this route, but I’m confident we can get comfortable with it. Today, we were indeed able to get something running locally and pushed to Amazon, so this so far looks much more feasible to work with than Lambda.

One thing’s for sure, in the past few weeks I’ve learned that there’s a lot more to web dev in the real world than I imagined!

by Matt at September 19, 2016 09:35 PM

Laily Ajellu

Audio and Video in your App - Get "A" certified

In last week's post (Input Forms Accessibility - Get "A" certified), we started to analyze what our web apps need to reach level A of accessibility certification.

Now we'll analyze how to make the audio and video in your web app accessible.

Why make Time Based Media more Accessible

Alternatives allow your content to be easily duplicated or re-purposed, which can make your application more popular and available to a wider audience.

For time-based media (media where info unfolds over time – audio and video), we provide alternatives for the following audiences:
  • Visually disabled users
  • Users who have less knowledge of the topic
  • Hearing disabled users
  • Users with time limitations

How to make Time Based Media more Accessible

  1. Link to a text transcript containing the content below - right beside the link to a video/audio page

  2. Link to an audio MP3 - right beside the link to a video page. The audio MP3 should be a mix of the audio extracted from the video and the content below

  3. Add captions to the video

Content of transcripts, captions and supplementary audio

  1. Identify the name of the person who is speaking
  2. Identify if the speaker is a person from the audience or the main speaker
  3. Identify if the statement is a question or a part of the main content
  4. Mention if there's applause
  5. Mention if there's laughter
  6. Mention when music is being played
    • Identify non-vocal music:
      • Title
      • Movement
      • Composer
      • Tempo
      • Mood
    • Transcribe the lyrics of vocal music
  7. Note other significant sounds that are part of the recording
  8. Describe what's being shown in the video
    • Actions
    • Characters
    • Scene changes
    • On-screen text

    Note: If there is already a text transcript of a video/audio, captions are not required

Special Cases

Interaction with a Video

If the user has to interact with the video ("Click here to continue", "Choose an answer"), you must provide a keyboard-accessible way to interact, and include it in the transcript or text summary of the video

Video tutorials (without Audio)

  • Make sure any text in the video is placed in a transcript
  • If there is no text in the video, add a brief summary of what is shown in the video

by Laily Ajellu at September 19, 2016 08:37 PM

September 17, 2016

Andrew Smith

What do a POSIX signal handler and an SQL transaction have in common?

Since I expect this to be a long post, I'll give you the answer at the top: both are in effect critical sections, and you should avoid performing unnecessary operations in them at all costs. Or else it will cost you and other people days, months, years of wasted time.

POSIX Signals

I’ll start with a story about one of my successful open source projects: Asunder. I took over the project from a nice but busy guy at version 0.1. Most of it was written (pretty well too), and I’ve been fixing things here and there, improving it one bit at a time.

One of the things I added was logging, to make sure that I can fix problems experienced by others in their own environments. It was very useful (I now very rarely get bug reports) but I made one mistake: I added an fprintf in a SIGCHLD signal handler. It took me literally years to figure out that was a terrible mistake. For at least two years I kept getting bug reports about random, unrelated freezes and the log never provided any answers. This is what was happening:

  1. The app was running, starting sub-processes and waiting for them to complete.
  2. When a sub-process completed – it sent a SIGCHLD to the parent. That signal was handled in a signal handler, which interrupted whatever code was currently running in the parent.
  3. The above is expected and rarely a problem, except it turned out that the printf function takes some kind of global lock while it's doing its work. So when:
  4. A signal handler was itself interrupted by another signal, the printf in the new signal handler waited for the old printf to complete, which would never happen because the original printf was interrupted by the new one.

When I figured that out I cried a little in my mind. But I fixed it and took it as a good addition to my industry experience.


Fast forward a few years where I maintain a MediaWiki instance for our school. I migrated it to a new server, updated the PHP to the newest version, updated the database, etc. All worked well.

But then the semester started and new students tried to register for accounts on the wiki. And disaster struck. It turned out that new users could not register. To make things worse, existing users couldn't change or reset their passwords. Right at the beginning of the semester. When I did the migration I tested everything, but I had not considered that operations on the user table were in any way special. Turns out they're not, except they are. Here's what was happening:

  1. The first person since the servers were rebooted tried to register for an account. The web interface would just hang there, with the spinning circle, until the end of time. No timeouts or error messages.
  2. MediaWiki started an SQL transaction on the MySQL backend. To record that a user is being created.
  3. Before committing the said SQL transaction – MediaWiki would attempt to send an email to the new user via some PEAR library, via the server configured in $wgSMTP.
  4. $wgSMTP was not configured correctly, and the step above never completed.
  5. Which means the SQL transaction was never committed.
  6. Which means the users table remained locked, permanently.

I spent so much time (including three all-nighters) figuring this out! I ended up nearly desperate, asking for help on the MediaWiki-l mailing list. One guy (Brian Wolff) replied saying he didn't know what the problem was, but he offered what turned out to be the straw I needed to figure it out myself: enabling the MediaWiki debug log. I had a bunch of logging enabled already, but this is the one that showed me the deadlock.

Before that, I would stare at MySQL’s “SHOW FULL PROCESSLIST” and wonder how it’s possible that even though no queries were being executed – new ones would result in a timeout like this:

MySQL [cdotwiki_db]> SELECT user_id FROM `mw_user` WHERE user_name = 
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

I would look at the output of “SHOW ENGINE INNODB STATUS;” and wonder why there are multiple transactions there that have been sitting for hours but not causing a deadlock, even though it looked like a deadlock. I spent hours trying to decipher memory dumps like this:

---TRANSACTION 4748, ACTIVE 62 sec
8 lock struct(s), heap size 1136, 4 row lock(s), undo log entries 8
MySQL thread id 1345, OS thread handle 140329361524480, query id 20771 
web-cdot1.sparc cdotwiki_usr cleaning up
Trx read view will not see trx with id >= 4742, sees < 4742
TABLE LOCK table `cdotwiki_db`.`mw_user` trx id 4748 lock mode IS
RECORD LOCKS space id 70 page no 157 n bits 624 index user_name of table 
`cdotwiki_db`.`mw_user` trx id 4748 lock mode S locks gap before rec
Record lock, heap no 244 PHYSICAL RECORD: n_fields 2; compact format; 
info bits 0
  0: len 6; hex 41736f583139; asc AsoX19;;
  1: len 4; hex 000001ac; asc     ;;

Record lock, heap no 558 PHYSICAL RECORD: n_fields 2; compact format; 
info bits 0
  0: len 8; hex 41736d6974683230; asc Asmith20;;
  1: len 4; hex 000036c2; asc   6 ;;

TABLE LOCK table `cdotwiki_db`.`mw_user` trx id 4748 lock mode IX
RECORD LOCKS space id 70 page no 436 n bits 112 index PRIMARY of table 
`cdotwiki_db`.`mw_user` trx id 4748 lock_mode X locks rec but not gap
Record lock, heap no 42 PHYSICAL RECORD: n_fields 17; compact format; 
info bits 0
  0: len 4; hex 000036c2; asc   6 ;;
  1: len 6; hex 00000000128c; asc       ;;
  2: len 7; hex 21000001362118; asc !   6! ;;
  3: len 8; hex 41736d6974683230; asc Asmith20;;
  4: len 0; hex ; asc ;;
  5: len 30; hex 
3a70626b6466323a7368613235363a31303030303a3132383a2f66545962; asc 
:pbkdf2:sha256:10000:128:/fTYb; (total 222 bytes);
  6: len 0; hex ; asc ;;
  7: len 21; hex 61736d6974683230406c6974746c657376722e6361; asc 
asmith20 at;;
  8: len 14; hex 3230313630393135303530303137; asc 20160915050017;;
  9: len 30; hex 
623061346535323762613365336462656133323035633666343564663163; asc 
b0a4e527ba3e3dbea3205c6f45df1c; (total 32 bytes);
  10: SQL NULL;
  11: len 30; hex 
396561386335613365663263623666353062303736646165393934393331; asc 
9ea8c5a3ef2cb6f50b076dae994931; (total 32 bytes);
  12: len 14; hex 3230313630393232303530303130; asc 20160922050010;;
  13: len 14; hex 3230313630393135303530303130; asc 20160915050010;;
  14: SQL NULL;
  15: len 4; hex 80000000; asc     ;;
  16: SQL NULL;

TABLE LOCK table `cdotwiki_db`.`mw_watchlist` trx id 4748 lock mode IX
RECORD LOCKS space id 70 page no 157 n bits 624 index user_name of table 
`cdotwiki_db`.`mw_user` trx id 4748 lock_mode X locks rec but not gap
Record lock, heap no 558 PHYSICAL RECORD: n_fields 2; compact format; 
info bits 0
  0: len 8; hex 41736d6974683230; asc Asmith20;;
  1: len 4; hex 000036c2; asc   6 ;;

TABLE LOCK table `cdotwiki_db`.`mw_logging` trx id 4748 lock mode IX
TABLE LOCK table `cdotwiki_db`.`mw_recentchanges` trx id 4748 lock mode IX

and getting no closer to figuring it out. In the end – how I found the problem was a single log line on a debug instance of the server. What an adventure!


The bug in Asunder happened because I ignored the warnings in the glibc manual that told me to keep unnecessary code out of signal handlers. I did not at the time know (or even consider) that printf could lock some kind of global structure, which could eventually cause a deadlock.

The bug in MediaWiki happened for the same reason, except they ignored the MySQL manual: “keep transactions that insert or update data small enough that they do not stay open for long periods of time”. I’m sure their code is a lot more complicated, but at the end of the day – they are sending an email in the middle of an SQL transaction, which is just a disaster waiting to happen. There’s no way I’m the only one who ran into this problem.

I’ll report the bug and we’ll see if they take it as seriously as I took my Asunder bug.

by Andrew Smith at September 17, 2016 06:29 PM

September 16, 2016

Matt Welke

Checking Out Elasticsearch

One of the technologies the client said they currently use is Elasticsearch, so we needed to set out to learn what it was and how to integrate it into what we are creating. At first I thought it was just some proprietary Amazon technology. Not that this means it's bad, just that I get more excited about open source technologies, because if I learn how to use them once, I can use them freely in anything else I create, be it open source or other projects at work.

I was pleasantly surprised to learn that it is actually a pretty stable, useful, open source framework. It's basically a program you run that abstracts away a lot of the complex data analysis and machine-learning type work, so that you can run queries against it similar to how you'd query a database: “Show me articles that are similar to this one…” and so on. It performs these complex queries and filters against its own data store. Some people use the Elasticsearch data store as their main database; others have to figure out how to sync their existing database with it if they want to add Elasticsearch to something already started. This is our situation: the client has a primary data store in their CMS containing their articles, etc. They already sync it with Elasticsearch, and we'll need to make sure we integrate our information that way too, if applicable.

I spent some time today learning about Elasticsearch as an open source framework, so that when it comes time to use Amazon Elasticsearch (Amazon's implementation of it, which the client already uses in production), I'll understand it better. There is an official book available online for free, and I was able to find some nice YouTube videos that show you how to dive in. So far, my team mate and I imagine our system being a combination of querying the data we collect and the stats crunching Elasticsearch can do, to return really useful information for the client. I'll be focusing a lot on continuing to learn and practice Elasticsearch over the next little while.

by Matt at September 16, 2016 08:27 PM

Anderson Malagutti

iOS 10: Unlock iPhone without having to press the home button

Since the iOS 10 update, users are 'forced' (by default) to press the home button to unlock their iPhones; however, I've just found a way to change that and make the iPhone unlock process work just as it did on older iOS versions, if you have Touch ID.

It’s very simple.


Go to Settings > General > Accessibility > Home Button. Then you'll have to check the option REST FINGER TO OPEN.



After that you'll be able to unlock your iPhone by just resting your finger on the home button, as you probably used to do on iOS 9.🙂


by andersoncdot at September 16, 2016 03:49 PM

September 15, 2016

Henrique Coelho

Planning and technology limbo

In the last few days we spent a lot of time planning the system: iterating over the database schema, how to implement the APIs, how to implement the client modules, and how all these pieces fit together. This often means that one part will influence the other, until we finally find a setting that fits together and works.

I usually enjoy this part of the project, planning involves a lot of thinking and good strategy – like solving a puzzle, but it can be very stressful sometimes. What I don’t like about planning is that it takes time, and during this time, you end up floating in limbo: you can’t make concrete plans because you don’t know if they will hold up in the long term. The technologies we are considering now for the project are MySQL, AWS Lambda + Gateway, and AWS Elastic Search.

The capabilities of PostgreSQL that I described in the previous post seem to be supported in MySQL 5.7, which makes it a suitable candidate for the database; however, we need to make sure it is capable of enduring the traffic. For the past few days, I've tried n times and failed n-1 times (for now) to create a suitable testing scenario for MySQL. The scenario is simple: set up a VM with MySQL and bombard it with hundreds of millions of rows (with numbers and JSON data) and see what happens; if it behaves as it should, we query it until it breaks. Seems simple, but the universe has been trying to thwart my plans (and succeeding) for the past few days:

  • 1st try: The internet was strangely slow that day, when I started the download of the VM. One hour later, it finished: the download was corrupted and I had to start over.
  • 2nd try: VM installed, but the version of the MySQL was wrong and I had to update it – obviously, I broke the installation beyond repair and I just rebuilt a new VM.
  • 3rd try: VM installed and MySQL updated. I also made a neat little script that inserts batches of 1,000 random records into the database and let it run for a while. The result: 55,000,000 rows inserted. “Great! Now I can start testing” – I messaged myself mentally. After some informal tests, it was time to go home and we needed to insert more records; we decided to let the script run overnight, but first, “does it stop when we lock the computer?” – we wondered, and decided to try. Any sensible person would back up the database before doing this, but 55 million rows really take a while to download; besides, we are beyond sensible, so we locked it anyway: that's how we corrupted the third VM.
  • 4th try: We quickly set up the database again (just made a new table) and left the script running overnight. During the commute, we were betting on the results: I bet 80% on the VM being corrupted, 15% on the script breaking somehow, 10% on someone turning the computer off, and 5% on it working (the fact that it sums up to 110 does not matter, what matters is the general idea). The database was corrupted.
  • 5th try: New VM made, we left the script running for a few hours (reaching around 80 million rows) until the VM completely ran out of space; with a few adjustments, we increased the space and let it run a little more. Tomorrow we will run it again to insert more registers.

So that was the MySQL saga until now. The other limbo we are floating in is the technology for the API: the client suggested we use AWS Lambda + Gateway, and maybe AWS Elasticsearch. These services are really nice (I will probably post about them if we get to learn more about them), but Lambda + Gateway seems to be a very simplified “framework” for an API – I am afraid that in the future we will have to modify it to be more robust and it will just not work. Although I would like to use them, I fear that the bureaucracy of AWS and its unintuitiveness will hurt us more than help.

by henrique coelho at September 15, 2016 11:01 PM

Matt Welke

Refining the Schema and Media Queries

Today we took another look at the schema for our back end API to iron out what it was we would be storing. With some insight from our team lead, who is experienced with web development, we identified more things we should store, but also ways to simplify the schema we had originally developed by storing less. For example, we originally planned on storing a “hit” when a visitor loaded a page and then coming back to that hit to add more info when the user did other things. But we've now reduced it to a simpler model that acts more like a log: once a hit is written, it's written. We don't muck things up by revisiting its entry. I think the simpler we make the schema from the beginning, the fewer problems we'll encounter later on.

I plan on investigating media queries thoroughly. They're used to find out information about the client's device and environment, and they can be very powerful. For example, they can tell whether a device has a low or high resolution, which can be used to decide whether to send the desktop or the mobile version of the website to the client. But they can also tell when the client is a mobile phone, so a modern phone with a high resolution still wouldn't be sent the desktop version, because that wouldn't make sense to view on a phone. We can mine this for our analysis: it's a wealth of information that can help us learn how the client's visitors are viewing the site.
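The kind of decision a media query encodes can be written out as plain logic (the 768px breakpoint here is an arbitrary example, not a value from our project):

```javascript
// What a media query like "(max-width: 768px)" decides, as a function.
// In the browser the same test is window.matchMedia('(max-width: 768px)').matches;
// here the rule is spelled out so the logic is visible.
function chooseLayout(viewportWidth) {
  return viewportWidth <= 768 ? 'mobile' : 'desktop';
}

console.log(chooseLayout(1920)); // "desktop"
console.log(chooseLayout(375));  // "mobile"
```

Note that viewport width is measured in CSS pixels, which is why a high-resolution phone still reports a narrow viewport and gets the mobile layout.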

by Matt at September 15, 2016 08:26 PM

September 14, 2016

Matt Welke

Data Mining Scenarios and MySQL

Today we continued to look into how to store the data we will be mining. We looked at our schema and decided it would be a good idea to abstract out our “visit” entity. A visit is going to be a series of hits (page views). They will be related to each other because they’re from the same visitor. We can link them together and to a user either by using the user’s login session or with cookies if they aren’t logged in. Instead of worrying at this phase about what constitutes a visit (Is it when they haven’t returned for another hit in over a half hour? Is it that a visit ends when they close their browser window?), we can just calculate the visits later on with parameters. We can link the hits together with these parameters to present the visits given those parameters. This may for example be an SQL view instead of the results of querying an SQL table.
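Computing visits after the fact, with the session gap as a parameter, might look something like this sketch (function and field names are assumptions for illustration, not our actual schema):

```javascript
// Group one visitor's hits (sorted timestamps, in ms) into visits:
// a new visit starts whenever the gap since the last hit exceeds maxGapMs.
function groupIntoVisits(hitTimes, maxGapMs) {
  const visits = [];
  let current = [];
  for (const t of hitTimes) {
    if (current.length && t - current[current.length - 1] > maxGapMs) {
      visits.push(current); // gap too long: close the current visit
      current = [];
    }
    current.push(t);
  }
  if (current.length) visits.push(current);
  return visits;
}

// The same hits produce a different visit count for a different definition:
const hits = [0, 60000, 120000, 4000000];
console.log(groupIntoVisits(hits, 30 * 60 * 1000).length); // 2
```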

After learning that the client would prefer we not use Postgres and instead use MySQL or SQL Server, we decided to benchmark MySQL, specifically how quickly it can read JSON data types. It turns out modern MySQL also supports JSON as a native type. It can even do indexes on JSON columns to speed up read queries. Coupled with the fact it is also free and open source, we’re mainly looking at it at this point for our back end. Given the log data we got from the client, where we learned how many unique visitors they get per month, we ran some scenarios to find out how many queries per second our system would need to handle. We’re still well within doable range. We’re confident we’ll be able to get a system ready that’s capable of handling the queries, with room to grow for the future.

by Matt at September 14, 2016 08:31 PM

Back End Routes and Promises

Today I contributed to work on the back end system by working on the routes. This meant creating a file for each HTTP verb in each folder for every model we expect to have in our application. For example, we expect to log visits, so we needed a “visits” controller and we needed to fulfill the read all, read one, create, update, and delete actions (or modes). A particular combination of HTTP verb and the presence of an “id” parameter implements these:

  • Read all – GET without an id
  • Read one – GET with an id
  • Create – POST without an id
  • Update – POST with an id
  • Delete – DELETE (with a mandatory id)

Using the HTTP verbs means we can have shorter, simpler route names. We can use GET for read one and POST for update, so that we only need a “/visits” route instead of two routes (“/visits/read” and “/visits/update”). We still have to write two handlers, but shorter route names are always nice.
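The verb-plus-id convention boils down to a small dispatch table (a sketch of the convention itself, not our actual router code):

```javascript
// Map an HTTP verb and the presence of an id onto a controller action.
function resolveAction(method, hasId) {
  if (method === 'GET')  return hasId ? 'readOne' : 'readAll';
  if (method === 'POST') return hasId ? 'update'  : 'create';
  if (method === 'DELETE' && hasId) return 'delete';
  return null; // e.g. DELETE without an id is rejected
}

console.log(resolveAction('GET', false)); // "readAll"
console.log(resolveAction('POST', true)); // "update"
```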

I again learned more about Node.js today as I tried to link all the routes into the application, and debug why they weren’t registering when I ran the server on our local machines. I learned about using “require” and the “module.exports” object in Node.js. I also learned about promises, and how they can help you code asynchronously without using callbacks. Thank you Mozilla Developer Network! ^_^ Instead of callbacks, you can chain JavaScript function calls:

new Promise((resolve, reject) => {
    if (finishBlogPost()) { // maybe this isn't successful?
        resolve('published');
    } else {
        reject('still a draft');
    }
}).then((successValue) => {
    // this runs if resolve() was called
}).catch((failureReason) => {
    // this runs if reject() was called
});
by Matt at September 14, 2016 03:29 AM

September 13, 2016

Jaeeun Cho

Implemented Actions-dropdown list for actions button

For a long time I've been working with the dropdown components, changing them into a reusable component, and now they have been refactored, because the original components were strongly coupled and dependent. I thought I understood what coupling and dependency are, but I was wrong: knowing and understanding are totally different.

Dependency is a relationship between two classes. If a class A uses a class B, then A depends on B. If a class A cannot be reused without a class B, then class A is the dependent and class B is the dependency. By this standard, my components had problems as reusable components.

My next task is to implement the actions button with the reusable dropdown components. When I worked with the new component, it was more convenient than the old version of the dropdown: I didn't need to pass a lot of props to the component. I'm still figuring out how the new dropdown components work as reusable components.

It has some issues related to font and icon styles; I will update it later.

I'm also studying the Internationalization API in ECMAScript and why we need to use this API for handling messages, as well as underscore.js.

by Jaeeun(Anna) Cho at September 13, 2016 11:28 PM

Laily Ajellu

Input Forms Accessibility - Get "A" certified

In this post we’ll be discussing how to create accessible input forms to reach level A of accessibility certification.


In total, there are three levels of accessibility certification for your web app:
  1. A
  2. AA
  3. AAA
By law, you must be A certified if you are:
  • a private or non-profit organization with 50+ employees; or
  • a public sector organization
Starting in 2021, your app must reach level AA

Input Fields

One of the factors for achieving level A is Error Identification. Here's how to make error identification more accessible:
  1. Notifying a User when they have incorrect input
    It's best to programmatically connect an error message with a failed input field. But if you can't, set aria-invalid = "true" on the failed fields
    • It should not be set to “true” before input validation is performed or the form is submitted.
    • If it’s done programmatically, you’re not required to use aria-invalid
  2. Displaying error messages:
    To display an error message, give it role="alertdialog"

  3. How to design the Form:
    • Give each input field an HTML5 label tag, so that it will be read by the screen reader
    • Place each label beside its field, so users who use zoom will be able to see both at once
    • Give examples for input that needs to be formatted a specific way, like dates and addresses
    • Use aria-labelledby to tell the user a field is required. It doesn't mean you can't use asterisks, color or other visual cues; you just need both.
    • Group related input fields together visually, but also using roles. For example, put your radio button input fields into a div with role="radiogroup"
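The aria-invalid rule can be sketched in plain JavaScript (in a real page the resulting attributes would be set on DOM elements; the required-field rule here is an invented example):

```javascript
// A required text field: empty input fails validation.
// aria-invalid is only ever set after validation runs, never before it.
function validateRequiredField(value) {
  const valid = value.trim().length > 0;
  return {
    'aria-invalid': String(!valid),
    // when invalid, this message would go in an element with role="alertdialog"
    errorMessage: valid ? null : 'This field is required.'
  };
}

const attrs = validateRequiredField('');
console.log(attrs['aria-invalid']); // "true"
```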

If you want to explore these concepts in more detail, see the Reference below!

Reference and Credit: - How to make Websites Accessible

by Laily Ajellu at September 13, 2016 10:55 PM

Matthew Marangoni

Hardware & Software Troubles

Unfortunately, I've had a lot of hardware problems in the past week. My hard drive began to fail early in the week, and after relocating my desk, my first thought was to perform a drive clone to avoid having to reinstall Windows, my virtual machine, and other software. I soon learned that unless you have powerful hard drive cloning software (like the kind used for forensic investigations of hard drives), it is nearly impossible to clone a drive with free cloning software once it has bad sectors; cloning is generally performed on healthy drives. I had read suggestions that doing a Windows backup and restore was a good alternative to cloning; this, however, also failed due to the bad sectors. It appears that once a clone or backup operation encounters a bad sector, it does not know how to handle it (ideally it should skip these sectors and continue, or attempt to recover them if the software is advanced enough).

After having no success with cloning or backups, I decided I would just make a copy of my virtual machine, so that I would only have to reinstall Windows and could keep my dev environment intact. Once Windows was installed, along with all my other required software and drivers, I attempted to load and restore my virtual machine. To nobody's surprise, this also failed, because VMware Player did not recognize any of the files as a valid virtual machine (files that were created by VMware Player itself, so I don't know what could have caused this issue). So once again I had to rebuild my VM from scratch, which brings me to my current state, where I can resume work on the BigBlueButton project.

One final thing I attempted: since I had upgraded to three smaller solid state drives, it was suggested to me that the best thing to do was to set them up in RAID 5, so that I would have two drives' worth of capacity to work with and parity to survive a single drive failure. After reviewing the process, I learned that it is impossible to set this up in Windows 7 without a separate hardware RAID controller. The next best solution would have been RAID 0, but for some reason Windows installed the System Reserved boot partition on SSD 0 and the rest of the Windows installation on SSD 1, making RAID setup impossible without a complete Windows reinstall. I am continuing to work with the three separate drives, since restarting this process to fix the Windows install for RAID would consume more time than it's worth.

by Matthew Marangoni ( at September 13, 2016 03:59 PM

Henrique Coelho

Why PostgreSQL is awesome

I was supposed to post this update on Friday (September 9th), but I forgot, so I decided to post it on Saturday (September 10th), but I forgot again; so then I decided to post it on Sunday (September 11th) and I forgot again; so I’ll post it today.

One of the most important features behind the popularity of NoSQL (or maybe the only one) is its ability to store data without a schema. In other words: forget about tables, you store anything you want as JSON. This flexibility comes in really handy when the data you need to store is a bit unpredictable but still needs to be indexed and searched; normally we would overcome this with complicated workarounds in our schemas, and that is where NoSQL really shines. Where it doesn't shine, however, is where you actually need relational schemas and cohesively organized data.

Myself, I’ve never been a big fan of NoSQL: I love JSON, and I love to store information in JSON, but NoSQL never gave me the confidence of actually being reliable. Thankfully, newer relational databases already support similar features for storing JSON: PostgreSQL accepts the JSON and JSONB data types, which let it recognize and query JSON objects as if they were actual columns.

For instance, the entry below contains a JSON object with the data of a person called John Doe, 54 years old, that lives in Toronto/Ontario.

TABLE people

 id | doc
----+--------------------------------------------------
  1 | {
    |   "city": "Toronto",
    |   "province": "Ontario",
    |   "person": {
    |     "name": { "first": "John", "last": "Doe" },
    |     "age": 54
    |   }
    | }

His first and last name could be retrieved using the following SQL query:

SELECT doc->'person'->'name'->>'first', doc->'person'->'name'->>'last'
FROM people WHERE id=1;

The syntax is fairly simple and almost self-explanatory, with one detail: the arrow ‘->’ retrieves a value as a JSON object, while ‘->>’ retrieves it as text.
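The distinction can be sketched in plain JavaScript (a toy analogue of my own, not how PostgreSQL implements it):

```javascript
// Toy analogue of PostgreSQL's -> and ->> operators,
// using the same document as the table above.
const doc = {
  city: "Toronto",
  province: "Ontario",
  person: { name: { first: "John", last: "Doe" }, age: 54 }
};

// doc->'person'->'name' keeps the result as JSON (still an object):
const nameJson = doc.person.name;

// doc->'person'->'name'->>'first' extracts the value as text:
const firstText = String(doc.person.name.first);

console.log(JSON.stringify(nameJson)); // {"first":"John","last":"Doe"}
console.log(firstText);                // John
```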

The nice thing about this feature is that SQL can now mix both worlds. It also means that instead of pulling query results from the database and computing/filtering them in the API, this can, if necessary, be done directly in the SQL statement.

by henrique coelho at September 13, 2016 03:17 AM

Matt Welke

Getting Good at Git

Git has turned out to be so much more useful than I ever could have imagined. Today I again spent my time mostly learning as we started creating the back end system. We’re using WebStorm for a Node.js/Express.js project and I got to test my new Git knowledge and see how it all fits together. I can safely say my days of sharing code by tossing a USB drive or uploading a zip to Dropbox are over.

The tables in our plan for the back end are numerous, meaning many routes are needed; I created those today, while my team mate worked on unit testing and on some tricks he’s used before to streamline the code we’ll have to write for each route. Node.js concepts like middleware, promises, and generators are things he’s using to make our code look cleaner and reduce the amount of it we’ll have to write.

by Matt at September 13, 2016 12:13 AM

September 09, 2016

Matt Welke

Getting To Know Node

Today I spent a lot of time getting familiar with our chosen back end framework, Express.js. I’ve used other MVC web frameworks before, so it didn’t take long to get used to Express. Aspects like views, models, controllers, and routes were already familiar to me. However, I need to spend time getting familiar with Node.js itself. It has unique development tools, commands, ways of managing dependencies, etc.

It also is more modular than other web frameworks I’ve used. To compare it with Ruby on Rails, my usual go-to framework: it gets out of your way and does nothing for you. No code is generated for you. You need to choose which modules to pull in and create the variables that represent the server you want to use. I’m used to having a certain project structure enforced, where the framework expects certain files in certain folders, arranged a certain way, with a lot happening behind the scenes. Rails forces you to put controller classes in a controllers folder, but if you do, things just connect together. Node expects you to manually create routes, passing in functions as arguments as you do, and those functions become the controller behaviour.
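That "routes take functions" pattern can be sketched with a toy router (hypothetical code of my own, not Express itself): the function you register for a path is the controller behaviour.

```javascript
// Toy illustration of the pattern Express uses: handlers are
// registered per path and invoked when a request matches.
const routes = {};

// Registering a route: the handler function *is* the controller.
function get(path, handler) {
  routes[path] = handler;
}

// Dispatching: look up the handler for the path and invoke it.
function dispatch(path, req) {
  const handler = routes[path];
  return handler ? handler(req) : "404 Not Found";
}

get("/articles", (req) => "listing articles for " + req.user);

console.log(dispatch("/articles", { user: "matt" })); // listing articles for matt
console.log(dispatch("/missing", {}));                // 404 Not Found
```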

There are other concepts often used in JavaScript programming, like closures, asynchronous programming, and AJAX, that I looked at today with the help of my team mate, who is much more experienced with JavaScript than I am. I’m sure I will have no problem becoming familiar with them soon.

We also continued to look into our database schema for when we will be logging user data and looked into our choice of database technology. It turns out PostgreSQL has support for JSON data as a native type. It can query JSON objects and perform all the complex and powerful analysis we’re used to with SQL, without pulling the JSON object out (as a string) and storing it in memory. This could have an incredible impact on how quickly our system can help correlate things for the users and recommend articles etc for them.



by Matt at September 09, 2016 09:27 PM

Benchmarking Back Ends

Today we started looking into options for creating our back end API, which will be used to log the information we track about the site’s visitors. The back end will receive the data using RESTful routes. This gives us a lot of flexibility, since REST is a standard implemented by basically every web framework out there. Four frameworks came to mind as feasible:

  • Express.js (running on a Node.js web server)
  • Rails API
  • ASP.NET
  • Good old, plain PHP (running on an Apache web server)

I love the sheer speed at which a developer can prototype and deploy things with Ruby on Rails, and my team mate has a strange love affair with JavaScript. However, we aimed to be objective and not choose to use one programming language or framework simply because we anecdotally liked it.  We decided to benchmark. Spoiler alert! I, for one, welcome our new JavaScript overlords.😀

We looked at the number of transactions per second the back end could accept, where a transaction was a request that involved accessing a database. We didn’t have time to properly test all the options (Rails was tricky to get set up), but some specific Express.js vs. PHP results so far can be found on my team member’s associated blog post.  In summary, we found that Express.js on Node.js ended up being incredibly fast, even faster than we thought it would be at first. We actually expected Apache with PHP to be quickest. We thought of it as the “C” of web programming languages. It’s simple and low level. It ended up being the slowest and the quickest to fail when the number of concurrent requests grew.

We didn’t get around to testing ASP.NET (and to be honest, we think we’ll ditch it anyways since we want to stay open source), and Rails API ended up performing somewhere in the middle. However, Rails needs to be further tested since we weren’t able to get it fully set up to take advantage of multiple CPU cores or even access the database. In all cases, we tested these against a PostgreSQL database. We strongly believe we’ll end up going with PostgreSQL because of its reputation as a powerful, stable, open source option.

We still have much to do here. We need to create realistic benchmarks. Real users don’t click on things or scroll their mouse wheel thousands of times per second. We need to create benchmarks that reflect the way people read articles online and engage by commenting, sharing, etc. Preferably, they would reflect the way visitors visit the site. Perhaps we’ll continually benchmark as we gain information by starting to monitor the users and make the benchmarks more accurate over time.

by Matt at September 09, 2016 04:13 AM

Henrique Coelho

Benchmarking PHP5 x Node.js

Long story short: one of the things we did today was think about the best language/framework for building an API: it should be stable under heavy load, fast, and capable of CPU-intensive operations. We ended up with two alternatives, PHP5 and Node.js, and decided to do a little benchmarking to find out which one would be best.

For the first test, we set up a server with virtual machines of Apache + PHP5 and another with Express + Node.js and used Siege, a stress tester, to benchmark both servers. Siege creates several connections and produces some statistics, such as number of hits, Mb transferred, transaction rate, etc. For both servers, we used 4 combinations of settings:

  1. 1 core and 1,000 concurrent users
  2. 4 cores and 1,000 concurrent users
  3. 1 core and 1,500 concurrent users
  4. 4 cores and 1,500 concurrent users

The tests consisted of a very simple task: receive the user's request, perform a SELECT query on a database, and return the raw results. We tried to keep the two servers' tests as similar as possible. The database used was PostgreSQL, located on another virtual machine.

These are the source codes we used for the tests:


var express = require('express');
var pg = require('pg');

var config = {
  user: 'postgres',
  database: '...',
  password: '...',
  host: '...',
  max: 10,
  idleTimeoutMillis: 30000
};

var app = express();
var pool = new pg.Pool(config);

var query = 'SELECT * FROM testtable;';

function siege(req, res, next) {
    pool.connect(function (err, client, done) {
        if (err) throw err;

        client.query(query, function (err, result) {
            if (err) throw err;

            done(); // release the client back to the pool
            res.send(result.rows); // return the raw results
        });
    });
}

app.get('/siege', siege);

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});

$connection = pg_connect("host=... dbname=... user=... password=...");
$result = pg_query($connection, "SELECT * FROM testtable");
// fetch and output the rows ($result itself is only a query resource)
echo json_encode(pg_fetch_all($result));

These are the results:

Results (1 core):

                          Node.js (1,000)   PHP (1,000)   Node.js (1,500)   PHP (1,500)*
  Number of hits          39,000            4,300         2,000             -
  Availability (%)        100               95            66                -
  Mb transferred          11                0.06          0.56              -
  Transaction rate (t/s)  1,300             148           800               -
  Concurrency             655               355           570               -
  Longest transfer (s)    0.96              28.14         1.16              -
  Shortest transfer (s)   0.08              0.15          0.11              -

Results (4 cores):

                          Node.js (1,000)   PHP (1,000)   Node.js (1,500)   PHP (1,500)*
  Number of hits          55,000            5,100         14,000            -
  Availability (%)        100               98            93                -
  Mb transferred          16.02             0.07          4                 -
  Transaction rate (t/s)  1,800             170           1,700             -
  Concurrency             19.6              424           73                -
  Longest transfer (s)    0.4               28.16         1                 -
  Shortest transfer (s)   0                 0             0                 -

* Aborted (too many errors)

I was really expecting the opposite result: Node.js seems to be incredibly fast in comparison to PHP for these operations.

For the next test, we tried to focus on CPU-intensive operations by running the following algorithm, which counts the prime numbers below N (yes, it could be optimized, but the purpose of the test was to be CPU-intensive):


var express = require('express');
var app = express();

app.get('/', function (req, res) {
    function isPrime(num) {
        for (var i = 2; i < num; i++) {
            if (num % i === 0) { return false; }
        }
        return true;
    }

    function display(n) {
        var count = 0;
        for (var i = 3; i < n; i += 2) {
            if (isPrime(i)) { count++; }
        }
        res.send('' + count);
    }

    display(70000); // we tested with 70,000 and 100,000
});

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});


function isPrime($num) {
    for ($i = 2; $i < $num; $i++) {
        if ($num % $i === 0) { return false; }
    }
    return true;
}

function display($n) {
    $count = 0;
    for ($i = 3; $i < $n; $i += 2) {
        if (isPrime($i)) { $count++; }
    }
    echo $count;
}

display(70000); // we tested with 70,000 and 100,000


My expectation was that PHP would perform much better for this kind of task. These were the results:

                 70,000 numbers           100,000 numbers
                 Node.js      PHP         Node.js      PHP
  Seconds        2            26          2.5          Timed out after ~33 s

I don’t know what to think anymore. I guess we are not using PHP.

by henrique coelho at September 09, 2016 12:50 AM

September 08, 2016

Catherine Leung

Summer 2016

Once again, it is almost that time of the year for school to start.  The summer has been an interesting one, and this blog post is a reflection on some of the things I did.

I had a good number of summer projects planned… but I only really got around to one of them.  This summer, I wrote a guide to using p5.js with my cousin Ben, who teaches at an international school in Hong Kong.  We wanted something that a teacher could use with their students in the classroom. We decided to write the guide using an online publisher named gitbook (I love gitbook for writing notes for my students.  Write it once with markdown, get it published to web, pdf, epub and mobi… awesome)

I had actually started this project back in February.  I got to about chapter 3 and I hated what I was doing with it.   I felt that it was very wordy, too much reading, not enough getting to the fun programming parts.  I remember learning to program when I was a kid.  I didn’t want to read about how things were done.  I didn’t care about the background of BASIC…  I just wanted to write programs to make my computer do things.

After talking things through with Ben, we decided to take a different approach to our project.  What is the minimum amount of background info/setup we need in order to get started?   How can we allow someone to write code with as little setup as possible? It turns out that we only need to write about 3 paragraphs, include a picture guide, add a link to a video and use an amazing web based editor.

Sometimes, I teach introduction to programming and the first week typically involves explaining how to set up the development environment.  It takes time to do this.  How to get the compiler.  How to get an IDE.  how to claim your unix account.  Where to find your text editor.  The joys of pico/nano (don’t laugh too hard…it was the first editor I learned how to use on unix…)…vi, emacs, gcc, vs, xcode… its a lot of setup.  I know a lot of us take this stuff for granted but think about what happens when you get a new computer… getting your dev environment set up is not a fast process.  So, how do we simplify this as much as possible?  How do we get to the fun parts as quickly as possible?

It starts by choosing tools that will minimize the setup.  p5.js is a JavaScript library.  To use it, you need to get the library files, set up an HTML page, and create a JavaScript file to write the sketch in.  After you set up your HTML page, you generally do not modify it; you only need to edit your js file, so even though you absolutely need the HTML page, it is not really part of the program you are writing.  For tools, you typically need a web browser and an editor.  This is not a lot, but if you are just starting, or if you are in an environment where what you are allowed to put on your machines is limited, every extra thing you need to do before you start coding makes it that much harder to start.

To help simplify this setup, we decided to use Mozilla’s Thimble editor.  It is an html/css/js online editor.   It also allows you to publish your work. By doing this, we eliminate the text editor (and if you want to publish your work, we eliminated the webserver too).  Using Thimble means that the only application we need is a modern web browser.

Furthermore, and this is the really cool part, using Thimble means that we can actually set up the basic p5.js project for the reader.  Ben and I created an account on Thimble.  We then set up a Thimble project with all the files needed (the p5.js library file, the HTML file, and a stubbed-out JavaScript file for people to write in).  The JavaScript file contains some starter code for the p5.js sketch.  Thimble also allows us to write our own tutorials, so we can write instructions on what to do inside Thimble itself.  We then publish this project (one button inside Thimble), grab the link from the Remix button on the published page, and put that link into our project book.  Each chapter of our project book contains a goal (typically an image) to show what we are aiming for, immediately followed by a link to the related Thimble project remix.  The remix contains instructions (typically where to write the code and what to write).  In other words, all you need to get started is to click a link!  No other setup.

The guide then continues on with more detailed explanations for those who want to know the why for each of the topics covered.  Towards the end of the guide, I added the chapters about how to setup your own sketch outside of thimble and some background material.

There is still a lot of work to be done on our guide for sure.  Currently we have only one very basic project.  We will add more in the future but I’m pretty happy with what we have done so far.  You can access our guide here

On a more personal note, I started the summer by helping my parents out for a bit at their restaurant.  It's very different from my usual job, to say the least.  My part of the work was not really hard, but the hours are quite long.  All I can say is how much I respect my parents for doing it.  I know how hard they have worked all these years to raise my brother and me.  I am forever grateful.
I am also continuing to decorate my new place.  This summer’s decorating involved the balconies, one of the best features of my new place.  I grew some strawberries, some herbs, and some cherry tomatoes (why are the leaves drying out ? there is plenty of water. help!). I even put in a couple of chairs.

I also made a few pieces of pottery this summer mostly for myself.   One of them is this garlic jar.  I am rather happy with it.


by Cathy at September 08, 2016 03:35 PM

Matt Welke

Data, Data, Data

The climax of today was getting to meet the client, an engineering news publisher. But while we waited for that meeting, my team mate and I re-watched some machine learning lectures from a Coursera MOOC we took. The lectures describe techniques for analyzing data and making recommendations or sorting the data.

For example, TF-IDF (term frequency-inverse document frequency) helps you recommend similar things (like news articles) based on how relevant they are calculated to be in the context of every item in a system, and clustering helps you optimize things by pre-sorting the items based on their properties, so that the recommender system won’t have to search everything to find the most appropriate recommendation.
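As a toy sketch of the first idea (my own illustration, not code from the course): TF-IDF scores a term higher in a document when the term is frequent there but rare across the whole collection.

```javascript
// Toy TF-IDF: tf = occurrences of term in doc / total terms in doc,
// idf = log(total docs / docs containing the term).
// Only meaningful for terms that appear in at least one document.
function tfidf(term, doc, docs) {
  const tf = doc.filter((w) => w === term).length / doc.length;
  const containing = docs.filter((d) => d.includes(term)).length;
  const idf = Math.log(docs.length / containing);
  return tf * idf;
}

// Hypothetical mini-corpus of tokenized articles.
const docs = [
  ["gears", "torque", "gears"],
  ["torque", "engines"],
  ["news", "sports"]
];

// "gears" appears in only one document, so it scores higher there
// than "torque", which appears in two documents.
console.log(tfidf("gears", docs[0], docs) > tfidf("torque", docs[0], docs)); // true
```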

After the meeting with the client, we learned about how they want to increase the amount of article reading their visitors do, and how they want to target users more effectively with mailing list emails to help drive revenue. It sounds like my team mate and I had the right idea as we prepared. We’re going to have to log as much data as we can to uncover new things we can learn about the site’s users and understand their behavior. Then we can help them refine their current article recommender system (which is built into the web app framework they use for their site), or perhaps help them build a new, more powerful recommender system able to cooperate with their current one. Sounds fun!

by Matt at September 08, 2016 03:33 AM

September 07, 2016

Andrew Smith

Everyone disables SELinux, it’s annoying!

Everyone disables SELinux, it’s annoying!

Hah! I’ve been saying that for years and years, but the quote above isn’t mine; it’s from Scott McCarty’s security talk at LinuxCon 2016. The room was full of other Linux pros, and the statement was followed by way more nods than grimaces :)

SELinux zealotry reminds me of the free software fanatics, emacs nutters, and other such special people. Why not tell it as it is? Why tell everyone to enable SELinux when you know, in your heart, that will cause them way more trouble than it will save them from?

Thanks Scott! I feel vindicated.

by Andrew Smith at September 07, 2016 07:09 PM

Matt Welke


My first day! I was nervous, but I think I was mostly overthinking things. Today was productive. I set up my workstation and got to know some of the people at CDOT. It’s hard to do anything on our project right now because we haven’t yet met with the client, but in a way, we’re already ahead of the curve.

We learned that the client uses ASP.NET for their current application. While investigating, we managed to get an ASP.NET 2.0 web application, created in Visual Studio, running in a Linux environment using the Mono framework on Linux Mint. Our workstations are also now set up to develop ASP.NET applications using an IDE called Project Rider. Mono officially supports up to ASP.NET 4.5, but for now we were only able to get 2.0 working. With the rising importance of using and building open source software, this bodes well for our mission to create excellent open source software to help our client.

I can’t wait to begin the project itself!

by Matt at September 07, 2016 02:18 AM

Henrique Coelho

First day at CDOT and ASP.NET on Linux

The first post on my blog already starts off wrong: this is not really my first day at CDOT (I previously worked on the ZAPP system, the Z3 project, for 4 months), but it was my first day on this project! So here I go again, starting a new blog about my adventures at CDOT.

Orientation day is not exactly exciting if you are not here for the first time: we discussed the project and how to set up our workstations, practiced a little with GitHub, and attended a meeting with relevant information about CDOT. It was a challenge, however, to make a solid plan for our workstations: I am not exactly good with Linux (with any operating system, actually), and on top of that, the existing system for our project is written with ASP.NET, an “open source” framework developed by Microsoft (it is open source, but it certainly doesn’t feel like it). Don’t get me wrong: I actually like ASP.NET. The framework itself is great, and most people would agree that it is a very solid tool, whether they like it or not; the problem is that it is not easy to deploy on non-Windows machines without Visual Studio.

Microsoft announced ASP.NET’s open-sourceness (that’s what I call it) in 2014, which I think was a great move; but even after almost 2 years, there are many good, but no excellent, ways of deploying it without Windows. Probably the most acceptable solution is Mono, which is free and open source! The drawback of Mono, however, is that it is still not able to run the most up-to-date versions of ASP.NET – and by “most up-to-date” I mean versions from the last 4 years: the latest version of ASP.NET is 5, while Mono only seems to support projects up to 4.5.

Another problem, besides deployment, is the IDE: for frameworks like .NET, it is important to have an actual IDE for your project (vim will not take you very far: it is super fast if you know it well, but I don’t feel like the biggest constraint in programming is our typing speed). Up to now, there have been no IDEs to match Visual Studio and its ASPness, unfortunately. Oh, except for an excellent, open source, cheap (free if you are a student), lightweight, snappy and fantastic one made by JetBrains (which also has IDEs for C, C++, Java, JavaScript, Ruby, Python, PHP, iOS, and SQL, all with similar qualities): the IDE is called Rider. It is still under development, but stable enough to be used. I’m using it.

We are feeling optimistic about this project: despite Mono not being able to run the latest version of ASP.NET, it is likely that the current application wasn’t built with it (we still don’t have access to it). We were also successful in installing Mono and Rider on our machines, meaning we are (hopefully) very close to deploying the system in a Linux environment; developing .NET in such an environment was an alien concept a few years ago. We also downloaded an ASP.NET virtual machine from TurnKey GNU/Linux, which may come in very handy for deployment and testing.

Now we shall wait. Hopefully we will have access to the actual system soon, so we can deploy it and really start working on our project.


by henrique coelho at September 07, 2016 12:23 AM

August 27, 2016

Laily Ajellu

How to avoid the top 15 Accessibility Mistakes

When designing an accessible website, there are some common mistakes or misconceptions you may run into. This post describes how to avoid those mistakes so you can start off your accessibility development correctly.

It's always easier to design and develop with accessibility in mind than to add it in at the end, because accessibility is largely a question of UI design.

If you design a beautiful, yet inaccessible site and develop it, you've wasted a lot of time on something that has to be re-developed.

The key is to design a beautiful, and accessible site from the get go, so you don't have to re-code everything with a new design.

  1. All images, whether decorative or informative, should have an alt text. Refer here for how to implement the alt property: Alt Property
  2. Allow a user to use the keyboard to do the same things a mouse user can do. If you can click on something, you should be able to use enter and space to do the same thing.
  3. Don’t trap users in a keyboard navigation loop. Make sure the user can get to all the components by tabbing and other shortcut keys. Test this thoroughly to make sure you can get to all components.
  4. If dynamic content shows up on a webpage, tell the user it just appeared.
    E.g. when you choose your country in a form, a list of cities is loaded based on the country chosen. The user should be notified that this new dropdown menu has appeared on the page.
  5. When a user navigates using the keyboard, tell the user what they have landed on. Is it a settings button? An input field for your name?
  6. When you tell the user what they have landed on, don’t just tell them it’s a button, tell them what the button does. If you have 20 buttons on a page the user is just going to hear “button button button button ...” which doesn’t give the user any context at all!
  7. Don’t use tables to structure your page (an old way of doing web dev), because the user will think they’re in something like an Excel spreadsheet.
  8. When navigating data tables, the user should know what column heading and row heading they’re at, and the value of that cell.
  9. Use headings to indicate what this part of the page is about. Are they at the nav bar? Are they at an article to be read? Are they at a menu where they’re supposed to choose an option?
  10. Don’t use only color to convey info. For example, telling the user:
    "As you can see from the highlighted part of the code, this is the proper way to use a <button> tag"

    which refers to highlighted code somewhere on the page.

    Accessibility users may not be able to see the yellow highlight.
    Instead, restate the part of the code you want to refer to:
    "This is the proper way to use a <button> tag:
    <button type="button">Click Me!</button>"

  11. Use captions for video and audio. Also, use captions to describe images.
  12. When an accessible user tabs to a component and chooses an option, for example “Mute Audio”, don’t reset the tab order to the beginning of the page; resume tabbing where they left off (at the “Mute Audio” button).
  13. Allow the user to skip over a navigation element easily, especially if it exists on all pages of your multi-page website. Also, allow the user to skip to different sections of the page.
  14. Give the page a title. The URL will be read, but this is often not as clear as a simple title indicating to the user what page they’re on.
  15. Do not create another page for accessible users and ask them to use it instead, because this segregates them from other users which is inhumane.

Credit to Todd Liebsch for describing most of these common mistakes!

Please leave questions and comments below :)

by Laily Ajellu ( at August 27, 2016 07:34 AM

August 25, 2016

Matthew Marangoni

Recent Changes to Settings Modal

A number of changes have been made to the settings modal, and it now has finally been merged with the BigBlueButton HTML5 master branch.

These changes include:

  • the settings modal no longer contains a submenu option for logging out of the session; this has been moved to a smaller menu which appears prior to opening the settings modal (added by Jaeeun). As a result, we were able to remove some css and also reorder the tabindex.
  • the min-height of the modal and of the submenu rows has been adjusted so that the submenu options no longer scroll and the rows line up evenly with the text in the list of submenus
  • checkboxes in the submenus now have appropriate spacing so that they are not directly touching the scrollbars (if they appear)
  • the structure of the files for each submenu was fixed: each submenu is now contained within its own subfolder and renamed to component.jsx to follow the file structure of the project. This required revising a few import statements in some other related files.
  • some leftover css properties were still referenced in the HTML, but the css classes no longer existed, so the references were removed
  • all import paths were changed to relative paths instead of absolute paths to resolve a conflict with a pull request from Alex
  • all inline css was removed and replaced with proper classes throughout the settings modal
  • the previously used "hidden" css class for hiding content on screen (since it was only used for screen readers) has been replaced with the "hidden" HTML attribute, which is cleaner and achieves the same end goal

Some of the changes can be seen below:

Some things which still need to be fixed: the font size control in the application submenu should detect the current font after it has been changed and the submenu is reopened; some buttons were not darkening properly on hover; and FormattedMessage needs to be implemented for translation purposes.

by Matthew Marangoni ( at August 25, 2016 11:19 PM

Anderson Malagutti

Convert RFC3339 date (Youtube’s API format) to PHP DateTime

I’ve been trying to run some tests with the YouTube v3 API using PHP, and came across the duration format this API uses (technically an ISO 8601 duration).

The main problem came when I needed to parse a video’s duration time: the API’s return value for that parameter just looked weird.

For example, a video would return ‘PT2H34M25S’ as its duration time.

Finally, with some searching, I was able to solve it using PHP’s DateTime class.

The following code did it for me; hope it helps:

public function convertYoutubeDurationTime($_youtubeDurationTime = 'PT2H34M25S')
{
     $time = new DateTime('@0'); // Unix epoch
     $time->add(new DateInterval($_youtubeDurationTime));
     return $time->format('H:i:s');
}

This function would return ‘02:34:25’, which is the video’s duration time.😉
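For anyone doing the same in JavaScript, here is a rough equivalent (my own sketch, not from the original post) that parses the duration with a regular expression:

```javascript
// Parse an ISO 8601 duration like "PT2H34M25S" into "HH:MM:SS".
// Only handles hours/minutes/seconds, which is what YouTube returns
// for video durations under a day.
function convertYoutubeDuration(duration) {
  const m = duration.match(/^PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$/);
  if (!m) throw new Error("Unsupported duration: " + duration);
  const pad = (n) => String(n || 0).padStart(2, "0");
  return `${pad(m[1])}:${pad(m[2])}:${pad(m[3])}`;
}

console.log(convertYoutubeDuration("PT2H34M25S")); // 02:34:25
console.log(convertYoutubeDuration("PT4M13S"));    // 00:04:13
```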


A special thanks to this post on stackoverflow that helped me:

by andersoncdot at August 25, 2016 03:15 PM

Jaeeun Cho

Implement dropdown components for reusability

I've implemented drop-down components for the settings modal dialog.

However, there are other parts of the html5-client that need drop-downs, so I changed the drop-down into a reusable component.

My Component Structure:


<DropdownTrigger> can be a trigger which open the drop-down menu like button.

<DropdownContent> can be lists of the menu.

In the case of the settings menu, the horizontal three dots are the <DropdownTrigger> and the three menu items are the <DropdownContent>.

<Dropdown> Component : 
 This component controls whether the menu is opened (shown) or closed (hidden) via a button click or a key-down event.
  It also closes the drop-down menu automatically when the user clicks outside of the menu.
  For key-down events, the drop-down menu responds only to the Enter key, the space bar, and the down arrow key.

<DropdownTrigger> Component : 
  In this component, a button is used as the trigger, so a developer can pass the button's icon name or label as a property.
<DropdownContent> Component : 
  With this component, the menu list is shown when the user clicks the button (trigger).

<SettingsDropdown> Component:
  This component includes the specific information about the menu, such as the title, icon, tab-index for keyboard control, and others.
  There is also a function related to keyboard control in this component.
  In this component, a developer can import <Dropdown>, <DropdownTrigger> and <DropdownContent>.
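The open/close behaviour the <Dropdown> component manages can be sketched outside React (class and method names here are illustrative, not the actual component code):

```javascript
// minimal stand-in for the <Dropdown> component's state logic:
// opened by Enter / space bar / down arrow on the trigger,
// closed again by a click outside the menu
class DropdownState {
  constructor() {
    this.isOpen = false;
  }
  handleTriggerKeyDown(key) {
    // only Enter, the space bar, and the down arrow open the menu
    if (key === 'Enter' || key === ' ' || key === 'ArrowDown') {
      this.isOpen = true;
    }
  }
  handleOutsideClick() {
    this.isOpen = false;
  }
}

const dropdown = new DropdownState();
dropdown.handleTriggerKeyDown('ArrowDown'); // opens the menu
dropdown.handleOutsideClick();              // closes it again
```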

  The drop-down menu also includes accessibility features for the button and the list. The button has aria-haspopup, aria-labelledby and aria-describedby.

  * aria-haspopup: Indicates that the element has a popup context menu or sub-level menu.
  * aria-labelledby: Identifies the element that labels the current element. It provides the user with a recognizable name of the object.
  * aria-describedby: Identifies the element (or elements) that describes the object.

  In the menu list, each menu item also has aria-labelledby and aria-describedby, with a paragraph tag holding the label/description of the menu.

A developer can use the <FormattedMessage> component so text can be read by a screen reader. The description is a message for developers, and defaultMessage is what the screen reader reads. These messages are also declared in en.json, the English-language string file.

by Jaeeun(Anna) Cho ( at August 25, 2016 04:30 AM

August 24, 2016

Laily Ajellu

Introduction to ARIA for HTML

Why care about Accessibility?

Have you ever tried to use a website with your eyes closed, or with the screen turned off? You have no context of what is going on or what you’ve clicked. People with disabilities use screen readers - apps that read out the screen to you.

In the beginning it can be a nightmare of overlapping words and vague descriptions like “button”, leaving you with no idea what the button does. But a properly coded website labels its buttons and other components so you hear something like: “signout button” instead.

Isn’t that clearer?

Who is the Target Audience?

  • vision-impaired users

  • dexterity-impaired users

  • users with cognitive or learning impairments

How do I Start Coding?

ARIA - Accessible Rich Internet Applications provides a syntax for making HTML tags (and other markup languages) readable by screen readers.

The most basic aria syntax is using roles. Roles tell the screen reader what category the tag belongs to - eg. a button, menu or checkbox.

Using Roles

In HTML, use elements according to their intended purpose. Don't just use a div when you need a checkbox; a native input element already has some accessibility features built into it.
eg. <input type="checkbox" role="checkbox">
not <div role="checkbox">


  • The role of the element is more important than the HTML tag it's on

  • Do not change a role dynamically once you set it; this will just confuse your users

What’s Next? Establish Relationships

These are the aria attributes that establish relationships between different tags:
  1. Aria-activedescendant
  2. Aria-controls
  3. Aria-describedby
  4. Aria-flowto
  5. Aria-labelledby
  6. Aria-owns
  7. Aria-posinset
  8. aria-setsize

Aria-describedby & Aria-labelledby

  • Explains the purpose of the element it’s on
  • Most commonly used, and most useful
  • Create a paragraph tag with the label/description info and place it somewhere off the page

CSS recommended:

The great thing is that you don't have to add any css to hide the paragraphs pointed to by aria-labelledby and aria-describedby.
All you have to do is add the `hidden` attribute to your html tag!
Reference: Hidden attribute

Code Example


Aria-activedescendant

  • Shows which child is active
  • Must be on a visible element
  • Must be someone’s descendant
  • Or must be owned by another element using aria-owns

eg. on a textbox inside combo-box

Code Example


Aria-controls

  • If you click or change the value of one element, it will affect another

Eg. If you click a button Add, number will be increased by 10

Code Example


Aria-flowto

  • Indicates which element to look at/read next
  • Doesn't affect tab order
  • Only supported by Firefox and Internet Explorer
  • Reads flowto only when you press the = key, so it's not very useful
  • Can flow to more than one element

Code Example


Aria-owns

  • Indicates who the parent of a child is
  • Do not use if parent/child relationship is in DOM
  • A child can only have 1 parent

Code Example

Aria-posinset & Aria-setsize

  • Aria-posinset indicates the position of an item in a set
  • Aria-setsize indicates the number of items in the whole set
  • Don't use them if all the items of the set are already present (the browser calculates these values)

Code Example

Change Aria Properties Dynamically (except Roles!)

  • Eg. Aria-checked on the chosen checkbox
  • Keeps the user up to date with page changes
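The idea can be sketched with a tiny helper (a hypothetical pure function; wiring it to a real checkbox element is omitted here):

```javascript
// aria-checked is a string-valued attribute ("true"/"false"), so
// keeping it up to date means flipping the string, not a boolean
function toggleAriaChecked(current) {
  return current === 'true' ? 'false' : 'true';
}

toggleAriaChecked('false'); // → 'true'
```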

Make Keyboard Navigation Intuitive

  • Enable navigation using up and down arrow keys
  • Enable select with space and enter

Review of Aria Process

  1. Choose HTML tags that are more specific to your needs
  2. Find the right roles
  3. Look for groups and build relationships
  4. Use states and properties in response to events
  5. Make Keyboard Navigation Intuitive


by Laily Ajellu ( at August 24, 2016 08:36 PM

August 21, 2016

Laily Ajellu

Which WAI-ARIA Docs Do I Need to Read for my Profession?

There are many WAI-ARIA docs, but you don't need to read all of them.
Which one you read depends on your job: are you a Web Page/App Developer? An E-books or E-documents author? A Web Browser Developer?

This post will outline which docs you should read according to your job, and gives a sample of each doc's contents.

For all Web Professionals


  1. WAI-ARIA

    Specifies accessibility attributes that
    • a browser developer should recognize from code it interprets
    • a web page developer should put in their code

  2. Accessible Name and Description

    Describes how the screen reader determines what to say, and how to manipulate it so that it's easy to listen to.

    For example:

    It describes how
    • aria-labelledby
    • aria-describedby
    • aria-label
    • HTML5 label tag
    are used together to form a jumble of words for the screen reader.

    The key is to make this as close to human language as possible, for a comfortable user experience.

For Web Page/App Developers:

  1. Graphics WAI-ARIA

    How to add roles so that a screen reader can understand graphics and images.

    For example:

    role="presentation" tells the screen reader that the element it's on is just used for visuals.
    The screen reader then won't put it in its element tree, and ignores it.

    Code example:

    The following code was written by Matthew Marangoni; find his Github here.

    How it Renders:

    Notice that it doesn't render any differently, as with all ARIA attributes. The div just isn't recognized by the screen reader anymore, because the programmer has indicated that it's not required for understanding the content or logical flow of the page. (It's just for presentation)

  2. WAI-ARIA Authoring Practices

    Extends WAI-ARIA written for All Web Professionals above. Authoring Practices focuses on how Web Page/App Developers should implement their code.

For Authors of E-books or E-documents:

Digital Publishing WAI-ARIA

  1. Has attributes you can use like:
    • doc-chapter
    • doc-credits
    • doc-glossary
    to structure the webpage similar to a book, so that accessible readers can access each section easily.

For Web Browser Developers:

  1. Core Accessibility API Mappings

    Specifies how each attribute should affect the browser, and how the browser should communicate with screen readers and other accessibility devices.

  2. Digital Publishing Accessibility API Mappings

    Extends Core Accessibility API Mappings above to include instructions for how to communicate code written by authors of E-documents to accessibility devices.

  3. HTML Accessibility API Mappings

    Extends Core Accessibility API Mappings above to include how to communicate code written in HTML by the webpage developer to accessibility devices

    For example:

    Browser must respond to:
    • Keyboard focus
    • Native HTML feature
    • WAI-ARIA role, state and property attributes

  4. SVG Accessibility API Mappings

    Extends Core Accessibility API Mappings above to include how to communicate code written using SVGs (a type of image) to accessibility devices

    For example:

    Browser must define how:
    • Charts
    • Graphs
    • Drawings

    act when they:
    • Have keyboard focus
    • Are implemented using native SVG features
    • Have WAI-ARIA role, state and property attributes

    Difference between SVG and other image extensions

    However much you zoom into an SVG, the image stays crisp.

Knowing which docs you need to read for your profession lowers the accessibility learning curve!
So go out and make your webpage accessible!

by Laily Ajellu ( at August 21, 2016 03:54 AM

August 18, 2016

Matthew Marangoni

Adjustments to Settings Modal

All keyboard controls are now fully functional! There are also a few minor improvements which have been made to the settings modal. These include:

  • Renaming instances of className to componentName within the settingsModal file to follow the project's coding conventions

  • Destructuring the FindDOMNode statements and instead creating a function to call the focus method on an object. This is to declutter the code and avoid any repetition, so that if changes ever need to be made they only need to be changed in one place (the setFocus method)

  • Adjusting the minimum modal height so that it is large enough to display the entire submenu without a scrollbar, but still may require a scrollbar for a submenu's content

Some styling issues that still need to be resolved:
  • Some buttons do not darken on hover in firefox, but do in chrome - may need to implement onhover rules in the scss to force the behaviour for these buttons.
  • Some icons in the submenu's list do not render properly in different browsers - I have not been able to reproduce this issue yet, however it may be an issue with the icon files themselves. I will be testing with a newer set of icon files soon.

by Matthew Marangoni ( at August 18, 2016 10:37 PM

August 11, 2016

Laily Ajellu

Binding in the Constructor

Recall that we need to bind the correct context to event handlers when we use ES6 class syntax. (With the older React.createClass syntax, `this` was automatically bound for you.)

For example, if we left our implementation without the bind we would not have access to any of the SlideControls props, and `this` would refer to the Button, not the SlideControls.

Another way to bind the context is to use arrow functions.

Two Binding Methods:

Although we can simply add bind to every function that needs a reference to the element it was called from, it's not good practice.
This makes the code look pretty messy and long when you have many elements.

React "recommends that you bind your event handlers in the constructor so they are only bound once for every instance." (React Reference Doc)

Example of Binding in the Constructor:

All you're really doing is setting this.previousSlide to its bound version, creating a shorthand. So go on! Bind in your constructor.

by Laily Ajellu ( at August 11, 2016 04:04 PM

How to use Meteor's Settings.json

Security for your keys

With the rise of big-name social media apps being hacked in the past few years, security is vital for frameworks like Meteor.
So, Meteor provides Settings.json, a file for storing API keys and other config info that you can choose to hide from the client.

What are API Keys?

  • When your program uses another application, it communicates with that application's API.
  • It communicates using an API key, a password that identifies the calling program, its developer, or its user to the Web site.
  • used to track and control how the API is being used
    • eg to prevent malicious use or abuse of the API

Reference: Wikipedia Defines API Key

Does BigBlueButton use API keys?

BigBlueButton’s HTML5 client doesn’t actually use any API keys. We’ve configured the API our app uses, Redis, to listen on the localhost of our main server so we don't have to use a key for authentication.

BBB is also a single server solution, meaning that all dependencies and components are on the server too. So it never has to send out any requests over the network to communicate with them. They’re all right there in the same home.

How do I use Settings.json?

Its use is really simple: it’s just a JSON file, i.e. a file with key-value pairs.
Here’s an example:

magicPizzaService is going to be available only on the server and can be accessed in your javascript with:

By default, anything we put inside of our object is accessible only on the server.
If we do want a key or configuration value to be available on the client, we can simply wrap it in an object named public:

But don’t worry, your unwrapped key is still private! You don’t even have to wrap it with private, but you can if you want!

The keys wrapped in public and private can be accessed in your javascript like so:
Notice in the 3rd example that you don't need to do settings.private.keyName, because it is the default.
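As a sketch of the shape involved (the key names below are made-up examples, not BBB's real config), the loaded settings object looks like this:

```javascript
// hypothetical settings.json contents, mirrored as the object Meteor
// exposes as Meteor.settings on the server
const settings = {
  public: { appTitle: 'My App' },  // shipped to the client as well
  magicPizzaService: 'secret-key'  // server-only by default
};

// server code can read both:
const apiKey = settings.magicPizzaService;
// client code only ever sees the public subtree:
const clientVisible = settings.public.appTitle;
```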

How to load the settings

In BigBlueButton, since we startup Meteor via command line, we use this command to make our settings available and run the app at the same time:

>>meteor --settings settings.json

Splitting settings.json

Another security concern is the values you need in development vs. production. When testing and developing, you should use API keys linked to dummy accounts (especially when users or your product have to pay money for the services).

For example, in BBB, we want to have an HTTPS connection only in production. (and use HTTP in development). We can store a different value for each mode, prod and dev.

Just remember:

*NEVER COMMIT settings-production.json*

Never ever. You don’t want your keys to be stolen from your published source code. Even if your repo is private, Github, Bitbucket, and other source code storage tools are still vulnerable to hacking themselves!

Josh Owens’ blog post on the subject really hits the point home with this Github search for S3 keys.

The easiest way to deal with this is to put settings-production.json in the .gitignore file (the file Git uses to know which files to exclude from commits).

Follow these steps to separate dev from production:

  1. Separate the settings.json file into two files: settings-development.json and settings-production.json

  2. Put values for testing in settings-development.json, and values for real end-users in settings-production.json. Notice the two files are identical except for the values!

Load one of the two files when starting up Meteor:

  1. >>meteor --settings settings-development.json
  2. OR
  3. >>meteor --settings settings-production.json

How are we using this in BBB?

The solution above didn’t fit all of our needs; we wanted to:
  1. Have a shorter way to launch into dev (since we are in dev most of the time)

  2. Have separate files for our public and private values, not just wrapped separate objects in the same file

  3. Have common values for dev and prod without having to duplicate our code across the two files

We found that this package does all of that! 4commerce:env-settings

And its installation was straightforward as well.

This suited our needs exactly! And given the configuration info that we want to store in our files, this is part of the directory structure we chose for BigBlueButton:

This way, our Javascript code stays free of config values and we can simply use them wherever needed.
Although we're not storing any sensitive data, it localizes, organizes and reduces our overall code!

Credit for images and Meteor content Meteor Documentation - Making Use of Settings.json

Please feel free to leave comments or questions below :)

by Laily Ajellu ( at August 11, 2016 04:04 PM


July 28, 2016

Laily Ajellu

Using Multiple classNames for One Element - React

Adding many css classes to an element can be very handy. You can combine styles while keeping your code from becoming redundant. For example, how can you implement a button that needs two classes? There are three methods:
  1. Using className={} and style={}

  2. Using cx(), an old React function that's now deprecated

  3. Using classNames(), a newer function by JedWatson recommended by React

Using Only one className

You can use one classname and add more css using the style attribute. But this way there are two different places for css, making it messy.

Using React's classSet - Now Deprecated

Although this version is deprecated, it's interesting to investigate how it works because it's still in some "legacy" code. (And by legacy I mean just a few years ago.)

We import the function cx, which takes as many classnames as you would like to add. Simply pass the classnames to the function.

What is 'styles'?

Notice we've also imported styles from ./styles.css.
This imports the styles object from a css file you've written. The styles object doesn't need to be explicitly declared. It contains all the classes declared in the css file, without needing to wrap them in anything.

If we use console.log, we find that styles.zoomForm is a long string that represents the path to the style.

We can even explicitly use the string to prove that it will work.

Using JedWatson's classNames - Recommended by React

As mentioned in the: React Documentation

This package is the most recent solution to classnames and should be used by all React developers. Its usage is very similar to cx. Find the usage here: JedWatson classNames

The great thing about this is that you can use conditional classNames by using an object to wrap the classname (using key-value syntax) and a bool.
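As a rough sketch of what the package does (a simplified re-implementation for illustration, not the real library code):

```javascript
// simplified stand-in for JedWatson's classNames():
// strings are kept as-is; object keys are kept only when their value
// is truthy, which is what enables conditional classnames
function classNames(...args) {
  const names = [];
  for (const arg of args) {
    if (typeof arg === 'string' && arg) {
      names.push(arg);
    } else if (arg && typeof arg === 'object') {
      for (const key of Object.keys(arg)) {
        if (arg[key]) names.push(key);
      }
    }
  }
  return names.join(' ');
}

classNames('btn', { circle: true, hidden: false }); // → 'btn circle'
```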

You can even store classnames in a prop and dynamically add them!

Here we're defining two custom props, color and circle, using propTypes and defaultProps.

In _getClassNames(), we first extract color and circle from this.props (so that we don't have to write this.props.color).

Then we define an empty object propClassNames with two attributes.

The first attribute is a css class styles.default (resolved from styles[color] - see defaultProps), set to true.

The second attribute is a css class, set to false by default (see defaultProps)

In render(), we extract className from this.props.

className is not a custom prop; it's an attribute that can be used on any tag like so.

Back in render(), we then return our component and pass three classnames: two from this._getClassNames() (default & circle) and one from className (prevSlide in this case), giving our element all three styles!

For more tutorials of the different methods, see: Eric Binnion's Post and Randy Coulman's Post

by Laily Ajellu ( at July 28, 2016 05:32 PM

Matthew Marangoni

Accessibility and Key Handle Events

In many cases for both accessibility and convenience, a user should be able to navigate through a menu using only the keyboard. To meet ARIA specifications, this is a requirement as not all users are capable of using a mouse to make selections.

As stated in the ARIA documentation:

 "Navigating within large composite widgets such as tree views, menubars, and spreadsheets can be very tedious and is inconsistent with what users are familiar with in their desktop counterparts. The solution is to provide full keyboard support using the arrow keys to provide more intuitive navigation within the widget, while allowing Tab and Shift + Tab to move focus out of the widget to the next place in the tab order.
A tenet of keyboard accessibility is reliable, persistent indication of focus. The author is responsible, in the scripts, for maintaining visual and programmatic focus and observing accessible behaviour rules. Screen readers and keyboard-only users rely on focus to operate rich internet applications with the keyboard."
Within the settings modal, we use a menu that is always displayed to the user, and each submenu is set as an ARIA menuitem by giving it the attribute role='menuitem'. Within this menu, the arrow keys, spacebar and Enter do not function by default, so key-handler methods must be added manually and called using onKeyDown within your menu list items.
This requires keeping track of two variables: the active menu and the menu currently in focus. The expected behaviour is for the down arrow to shift focus to the next submenu, the up arrow to shift focus to the previous submenu, and the spacebar/Enter keys to set whichever submenu is currently in focus as the active menu. Each time an up or down arrow is pressed, the focused-menu variable must be incremented or decremented, and it is important to keep these variables within the bounds of the menu - you don't want to decrement the variable when up is pressed at the beginning of the menu, and similarly you don't want to increment it when down is pressed at the end of the menu. Instead, logic must be added so the user can cycle through the menu from the beginning or end depending on which key is pressed.
Another thing to keep in mind when using the Tab key and arrow keys together is that you don't want the variables tracking the focus position to fall out of sync. Tab automatically shifts focus to the next element in DOM order (or whatever is set in tabIndex), so it is not necessary to write additional code to set the focus as in the down-arrow key event. It is necessary, however, to add logic so that pressing Tab within the menu increments (or, with Shift+Tab, decrements) the focused-menu variable. Additionally, it is possible for the user to tab out of the menu, whereas with arrows it is not. Finally, logic must be added to handle the case where the user tabs all the way through the settings modal and back to the beginning of the menu (or Shift+Tabs to the end of the menu); the focus should then be reinitialised to the start or end of the menu.
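The wrap-around arrow-key logic described above can be sketched as follows (function and key names are illustrative, not the actual BBB settings-modal code):

```javascript
// compute the next focused-menu index for an arrow key press,
// wrapping around at both ends of the menu instead of stopping
function nextFocusIndex(focused, menuCount, key) {
  switch (key) {
    case 'ArrowDown': // wrap from the last item back to the first
      return (focused + 1) % menuCount;
    case 'ArrowUp':   // wrap from the first item to the last
      return (focused - 1 + menuCount) % menuCount;
    default:          // other keys leave the focus where it is
      return focused;
  }
}

nextFocusIndex(2, 3, 'ArrowDown'); // → 0 (cycles back to the top)
nextFocusIndex(0, 3, 'ArrowUp');   // → 2 (cycles to the bottom)
```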

by Matthew Marangoni ( at July 28, 2016 03:03 AM

July 20, 2016

Laily Ajellu

How to pass Value to an Event Handler (React.js)


If you have a drop-down menu and you want to call a function when a different option is selected (eg. slide 2), you can use the onChange attribute.

If you then want to pass that option's value (eg. 2), you don't actually have to pass it as a parameter. The value is passed automatically in an object called event.

event is the object that has all the information about the event, ie. the user chose another option. event has a property called target, which returns the element that triggered the event.

In this case, target refers to the select tag, so event.target.value gives you the value that was chosen (eg. 2)
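A hypothetical handler sketching how the value arrives through the event object rather than an explicit parameter (the handler name and event shape are illustrative):

```javascript
// event.target is the <select> element; .value is the chosen option
function skipToSlide(event) {
  return Number(event.target.value);
}

// simulating the event object a browser would pass to onChange
const fakeEvent = { target: { value: '2' } };
skipToSlide(fakeEvent); // → 2
```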


You may be wondering what bind does and why we need it. bind ties together this and the skipToSlide method, so that this (when inside the method) refers to the select tag rather than the entire object that the method belongs to.

by Laily Ajellu ( at July 20, 2016 09:28 PM

July 18, 2016

Milton Paiva

Missing NTSYSV command

I am looking for alternatives to the ntsysv command, since after the introduction of systemd on Fedora, ntsysv doesn’t do the whole job anymore.

Suggestions are welcome.

by miltonpaiva at July 18, 2016 12:30 PM

July 14, 2016

Andrew Smith

Awesome Student Checklist

I vaguely remember there was a time when I had a one-page resume. I also remember that time didn’t last long. I’m baffled when I get resumes how many of them are two bullets long, especially in our industry (software).

Lots has been said and written about how it helps your professional career to work on open source (especially when you’re just getting started), so I won’t spend any time on that. What I want to rant about here is more broad than that.

How is it possible that someone (presumably at least 20 years old) has spent their entire life without doing any of the following?

  • Work on some interesting personal projects.
  • Join or start a club.
  • Participate in an interesting online community.
  • Volunteer.
  • Learn something you weren’t told to learn.
  • Try something you weren’t told to try.

I understand that as a student you don’t have relevant paid work experience. Of course you don’t, that’s why you’re a student. But really, you’ve done none of the above? And you expect me to give you a job? No thanks, I’d spend less time doing the work myself than I would holding your hand and telling you what to do.

Given the number of empty resumes I receive I have to wonder how many of them are actually good candidates who have done all kinds of interesting stuff but were told not to put any of it on their resume, cause it wasn’t a paid job. Let me tell you something – I don’t give a rat’s ass about how much money you’ve made in the past. I’m not hiring a CEO. I’m hiring engineers, who need to have interest in their field. I am looking for people who can think for themselves. I’m looking for evidence that you want to do this kind of work, and you have at least tried to do something independently. If you have that – at least there’s some hope that you’ll do well on my team. Without that – forget it, don’t bother applying.

As an example, here’s the type of resume I get excited about. Yours might be a page and a half instead of 4. But notice even with a decade of experience how much of my resume is various unpaid work. It’s all relevant! If from all that there are two things that jump out at an employer – that will put you two steps ahead of someone else.

There are lots of students out there. Show me what makes you awesome, and don’t pay attention to people who tell you to have a one-page resume. Those people don’t do any hiring. If they did – they would punch themselves in the face for giving such terrible advice.

by Andrew Smith at July 14, 2016 05:50 PM

July 12, 2016

Yunhao Wang

Pure CSS Cube




#wrap {
  border: 2px solid black;
}

/* one transform per cube face: */
transform: translateZ(-100px) rotateY(180deg); /* back */
transform: translateX(-100px) rotateY(-90deg); /* left */
transform: translateX(100px) rotateY(90deg);   /* right */
transform: translateY(-100px) rotateX(90deg);  /* top */
transform: translateY(100px) rotateX(-90deg);  /* bottom */

@keyframes cube{

by yunhaowang at July 12, 2016 01:07 AM

July 07, 2016

Jaeeun Cho

Dropdown list for setting menu

I'm working on the dropdown list for setting menu.
When a user clicks setting button, it will be shown as dropdown list, not modal window.

So, I googled to figure out how to do it in React and looked for examples.
Well, I found lots of examples, and most of them are provided as packages, like react-menu, react-dd-menu, rc-menu, and others.
I downloaded them to check how they work.
But everything I set up on my test server gave me an error,
so I couldn't use them.

It was a total waste of time.

Somehow, I did get the dropdown to show when I click the button.
But this dropdown is shown at the bottom of the window.
So, I'm studying animation in React(
and figuring the css out with this example (

by Jaeeun(Anna) Cho ( at July 07, 2016 10:19 PM

Matthew Marangoni

Keyboard Navigation in Settings Modal using React

I've been working on the setting Modal to enable keyboard controlled navigation in the event a user cannot use a mouse. Tabbing controls have already been completed, but other key functions are still a work in progress. A user should be able to cycle through the list of menus using the arrow keys, and select that menu by then pressing the 'enter' or 'spacebar' keys.

At this time I am able to cycle through the menu with arrows, but this currently also sets the menu it cycles to as the active menu at the same time. I'm having difficulty making the arrows only focus on the menu list elements rather than make them active, which is a result of my limited React experience. Additionally, the spacebar and enter keys currently only have limited functionality in the settings modal - they work everywhere except the menu list.

I am currently studying the React docs to determine the best way to implement making these keys functional. The documents I have found to be most helpful so far are listed below:

by Matthew Marangoni ( at July 07, 2016 09:13 PM

June 30, 2016

Jaeeun Cho

Complete my PR and logout confirmation modal.

My PR was finally merged into the git repository yesterday. To my shame, I didn't know how to implement the code simply and cleanly. I also made mistakes like missing spaces or semicolons, even though I checked my code with lint. I should learn and study by practicing coding and by reading others' code, and I should be more methodical about my work.

I implemented a confirmation modal for logout.
In the current code, a modal is already used for the Settings menu, so I tried to use the same function and style. However, the confirmation modal was shown behind the Settings menu, not in front of it, and I couldn't click any button on it. So I implemented a separate modal for confirmation.

This modal is shown on a double click of the Leave Session menu. The Settings menu list is built from an array (submenus), so I compared menus by class name, not by index, because the menu order can change.

Although my PR was merged, I'm going to test my logout process.

by Jaeeun(Anna) Cho ( at June 30, 2016 09:38 PM

Matthew Marangoni

Style Conflicts with Screen Readers

As the settings modal nears completion, a few changes were made to better adapt to screen reader users. While adding screen reader descriptions to all the elements, it became apparent that some of the interact-able portions of the settings modal were redundant and could be removed. Some of these elements included the "Done" button of the Video menu (it served no apparent purpose - same function as other "Done" button in the menu), and the lock layout option in the Participants menu (the HTML5 client will not support user adjustable layouts).

The font size adjustment bar in the Application menu had to be reworked - the way it was currently styled was causing the font-bar to not fit perfectly within the Application submenu and added unnecessary scrolling as a result. The font-bar still has an issue with the way the + and - resize buttons are being styled, and it appears this issue can only be fixed by fixing the icon itself as well as the way the button element handles those icons. Currently unnecessary padding is being added around the span that contains the icon as this is the default behavior for all other buttons, but in this case does not match the design styling. See below:

The menu options on the left of the settings modal, as well as some elements in the right-hand menus, need to be reworked as well. Currently they are written as unordered lists with list items. Normally this would be fine, but a screen reader detects these list elements and reads them out to a blind user as a list, which may confuse the user. As a result, any list items that are not actually meant to be lists must be converted into styled div containers instead. This is more tedious than making a list, but it allows more control over the elements' behavior.
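The conversion described above might look roughly like this (class names are made up for the example; the real markup differs):

```html
<!-- Before: a screen reader announces this as "list, 2 items" -->
<ul class="settingsMenu">
  <li>Audio</li>
  <li>Video</li>
</ul>

<!-- After: styled div containers carry no list semantics -->
<div class="settingsMenu">
  <div class="settingsMenuItem">Audio</div>
  <div class="settingsMenuItem">Video</div>
</div>
```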

by Matthew Marangoni at June 30, 2016 03:57 PM

June 23, 2016

Matthew Marangoni

First week at CDOT

During my first week at CDOT, I completed various setup tasks. Initially, I had to salvage computer parts from other unused machines and assemble my own. This was fairly easy; the only difficulty was finding an adequate graphics card, as most machines were already missing theirs. Once I had my machine built, I had to format the HDDs and install Windows 7 (this took a considerable amount of time, as there seemed to be endless Windows updates, driver updates, and reboots), but it did eventually finish the following morning.

Once that was complete, I was able to begin the initial setup of my VM environment, which involved tweaking a few NAT configurations and then installing Ubuntu 14.04 in the VM. All of this seemed to work just fine, so I proceeded with the BigBlueButton install. The install went reasonably smoothly but took a day or so to complete. I followed the instructions in the BBB docs step by step, but there were a few slightly outdated steps that gave unexpected results, which were known to my colleagues but not yet documented. Some portions of the BBB install took longer than others, so I used that time to review other documents (BBB HTML5 design, development, etc.) to familiarize myself with the BigBlueButton project. Following this, I set up the BBB developer environment with no issues.

Currently I am working through a few tutorials to get up to speed with the BBB project and working environments. I just completed the Meteor tutorial and am about to begin reading some of the Meteor documentation. Later I will move on to the React, ES6, and Mongo tutorials and documentation.

by Matthew Marangoni at June 23, 2016 03:58 PM

Screen Readers & Browser Compatibility

After beginning to add ARIA labels and descriptions to the settings menu content, I soon ran into a problem: although all my attributes were being added to elements correctly and could be seen in the DOM, at best only partial information was being read back by the screen reader (at the time, I was debugging in Chrome with ChromeVox). I decided to start with the bottom layer of the application, which contains the settings button that opens the modal, as it had the least complexity. I was able to add ARIA features to this button easily, but any features I added inside the settings modal were not being spoken back to the user.

My first thought was that aria-labelledby was working better than aria-describedby and that I could use one in place of the other (they function almost the same), but both would be needed regardless, and both were still not working in a few places, so this was not a solution. Later I thought perhaps my content wasn't being read because the referenced div elements were not in the right locations and weren't being seen, so I moved the containers around to various places in various files, again to no avail. I then tried changing the CSS class of the containers that hold the aria labels and descriptions, to see if the way I was hiding these elements from view was causing them to go undetected by screen readers. This did not fix the problem either, although I did find a better method of hiding content with CSS. That information can be found here:  and the CSS alone is:

.ariaHidden {
  /* standard "visually hidden" pattern: removed from view,
     but still announced by screen readers */
  position: absolute;
  overflow: hidden;
  clip: rect(0 0 0 0);
  height: 1px;
  width: 1px;
  margin: -1px;
  padding: 0;
  border: 0;
}

After days of rewriting my code and searching for solutions and best practices, I came to the conclusion that my code was not the issue; everything was following the ARIA standard guidelines correctly. I decided to try debugging with a different browser and a different screen reader to isolate the issue, and lo and behold, everything worked as intended in Firefox with NVDA. The issue all along was screen reader and browser compatibility.


As it turns out, Chrome has the worst ARIA support compared to Firefox, IE, and Safari (I'll be debugging in Firefox from now on), and while the ChromeVox extension is nice, it's still very much a work in progress and falls short of other screen readers like NVDA and JAWS. If you'd like to see which browsers have the best ARIA implementation, this document does a good job of detailing and visualizing that.

Now that my content was being read back to the user, I could finally start making some progress with accessibility. However, I soon ran into a new issue: certain elements within the settings modal caused the word "section" to be read out multiple times in quick succession. I couldn't determine which elements this was coming from, as I hadn't added the word "section" to any element, so I deduced that the screen reader was reading out empty div containers that were used for styling only.

According to the ARIA spec, the proper way to hide an element from a screen reader is to use the attribute aria-hidden="true". I tried adding this to every surrounding div container I could think of where I was experiencing the issue, but once again nothing solved the problem. Luckily, I found an article that described the exact issue I was experiencing, along with the solution. Once again, it comes down to ARIA not being equally supported across all screen readers and browsers: Firefox with NVDA does not support the aria-hidden="true" attribute (ironically, it would have worked fine in Chrome with ChromeVox, and I would never have realized this was an issue had I still been debugging with it). The alternative is to set the role to presentation, as in role="presentation". ARIA describes the presentation role as:

presentation (role): An element whose implicit native role semantics will not be mapped to the accessibility API.
The intended use is when an element is used to change the look of the page but does not have all the functional, interactive, or structural relevance implied by the element type, or may be used to provide for an accessible fallback in older browsers that do not support WAI-ARIA

This method also does not work for every browser and screen reader combination, so the best solution is to include both aria-hidden="true" and role="presentation" on any element whose only purpose is to style the page. The article that details this problem and solution further, and provides many test cases, can be found here:
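One way to apply both attributes consistently is a small helper along these lines (a sketch, not code from the project; the helper name and props shape are invented for illustration):

```javascript
// Stamp both hiding attributes onto the props of a purely decorative element,
// covering screen reader / browser combos that honor only one of the two.
function hideFromScreenReaders(attrs) {
  return Object.assign({}, attrs, {
    'aria-hidden': 'true',  // the standard ARIA way to hide an element
    role: 'presentation',   // fallback where aria-hidden is ignored (e.g. NVDA + Firefox, per the post)
  });
}

const divProps = hideFromScreenReaders({ className: 'iconSpacer' });
// divProps now carries className, aria-hidden="true", and role="presentation"
```

Routing every decorative container through one helper keeps the two attributes from drifting apart as the markup evolves.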

I will continue working on making the settings modal accessible, and documenting any issues I come across along the way.

by Matthew Marangoni at June 23, 2016 03:55 PM

Jaeeun Cho

My first pull request at git repository.

I have been working on the HTML5 client logout since the second week of June.

After finishing the implementation, I tried to send a pull request to the git repository last week.

However, I had a problem with the login part after I did a git stash and then fetched and merged from the upstream/master branch.

According to the console log, the meeting room could not be created and the 'validated' column of Users was set to false. As a result, the user could not log out normally.

At first I could not find where the error was. However, when I compared the files with the ones on my laptop, I found it. The error was in eventHandler.js for meetings.

Before merging with upstream/master, the code was:

eventEmitter.on('get_all_meetings_reply', function (arg) {

After merging with upstream/master, the code had changed to:

eventEmitter.on('get_all_meetings_reply_message', function (arg) {

This message name comes from MessageNames in akka-bbb-apps; the versions on my machine and on the server were different.

After checking everything, I sent a pull request to the git repository for the first time.
I got a lot of comments on my PR and fixed everything.

One of the comments was that I had created files unnecessarily.
I had divided every function into different files, especially the functions related to clearing the session and setting the location when the user logs out.
Instead, I put them together in Auth.

I also set up the dev_1.1 BigBlueButton development environment.
And I'm implementing a confirmation box that opens when the user double-clicks "Leave session".

by Jaeeun(Anna) Cho at June 23, 2016 02:51 AM

June 15, 2016

Laily Ajellu

Adding Interactivity to React


When an emoji is chosen, it must be set on the user dynamically. Because it needs to be dynamic, we need to hook into the React lifecycle so that whenever there's a change, it can be updated automatically.

My class structure:

Menu - generic
        MenuItem - generic
EmojiMenu extends Menu
        EmojiMenuItem extends MenuItem

Problem 1 - Whose State?:

Whose state should be changed:
EmojiMenu (the parent) or EmojiMenuItem (the child)?
Initially I thought it should be EmojiMenuItem's state (a bool, isChosen) so that it can manage its own resources. To change its state from within its parent's method, you need to use refs.

Solution 1 - It should be Parent’s State:

After reading this React Doc - More About Refs

“If you have not programmed several apps with React, your first inclination is usually going to be to try to use refs to "make things happen" in your app. If this is the case, take a moment and think more critically about where state should be owned in the component hierarchy. Often, it becomes clear that the proper place to "own" that state is at a higher level in the hierarchy . Placing the state there often eliminates any desire to use refs to "make things happen" – instead, the data flow will usually accomplish your goal.”

I realized that my initial choice was probably the wrong one. I could easily set the state on the parent ( theChosenEmoji: "nameOfChosenEmoji" ), simplifying the code.

Problem 2 and Solution 2 - Use refs this time:

Now I also wanted to add the attribute aria-checked to each EmojiMenuItem for accessibility. In this case, it was clear that refs needed to be used, because aria-checked is an attribute on the EmojiMenuItem tag.

In my render() function:

In my click handler:
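The overall pattern can be sketched in plain JavaScript standing in for the React components (a rough illustration with invented details, not the post's original code): the parent EmojiMenu owns theChosenEmoji, the click handler updates it, and render-time code derives each item's aria-checked value from the parent's state.

```javascript
// Parent owns the selection state; items derive aria-checked from it.
class EmojiMenu {
  constructor(emojiNames) {
    this.emojiNames = emojiNames;
    this.state = { theChosenEmoji: null };
  }

  // Click handler: record which emoji was chosen.
  // (In React this would be this.setState({ theChosenEmoji: name }).)
  handleEmojiClick(name) {
    this.state = { theChosenEmoji: name };
  }

  // render()-time computation of each EmojiMenuItem's attributes.
  renderItemProps() {
    return this.emojiNames.map(name => ({
      emoji: name,
      'aria-checked': name === this.state.theChosenEmoji,
    }));
  }
}
```

With the selection lifted into the parent, no child needs a ref to flip its own isChosen flag; re-rendering from the parent state keeps every item's aria-checked attribute consistent.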

by Laily Ajellu at June 15, 2016 08:35 PM

Matthew Marangoni

Settings Accessibility Features

I've been studying the various accessibility requirements, ARIA-related and otherwise, and have begun implementing some of them in the settings modal. The main accessibility features that must be included are simple keyboard navigation, tooltips, and a description for each element that is detectable by screen readers (this also requires modifying a few elements to include ARIA menu attributes).

Initially I had thought that navigating the options submenu with arrow keys would be the easiest method; however, I later realized that this would also require the user to have good vision, making it no longer accessible. Instead, the entire menu can now be accessed via the Tab key. I am currently working on an issue where tabbing does not skip the background elements, and I hope to have that fixed soon (I have isolated it to an issue with circle buttons only). Another issue that will be addressed is that the submenu list items cannot currently be activated via the spacebar (although their contents can).

Once the above is complete, I will add a description to each options element that provides a clear understanding of the element's function for users who require a screen reader. The implementation requires adding aria-describedby and aria-labelledby attributes and positioning the descriptive elements off-screen so they are not visible (this seems messy, but there appears to be no better alternative). Additionally, I plan for each element to have a tooltip shown on focus and on hover, so that it is displayed to both mouse and keyboard users.
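A minimal sketch of that approach (the id, class name, and wording are illustrative, not taken from the project):

```html
<!-- The button points at its longer description by id -->
<button aria-label="Increase font size" aria-describedby="fontSizeUpDesc">+</button>

<!-- Positioned off-screen via CSS, so only screen readers encounter it -->
<div id="fontSizeUpDesc" class="offScreen">
  Increases the font size used throughout the application
</div>
```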

by Matthew Marangoni at June 15, 2016 06:53 PM

June 14, 2016

Jaeeun Cho

Working with a new computer in my third week at CDOT.

Tuesday: I got a new desktop computer last Tuesday from my professor and started to assemble it. I put it together and tried to install Windows, but the installation got stuck at the first step. The computer was very slow installing Windows, so I assumed it had some problems.

Wednesday: I reassembled my computer with different parts and tried to install Windows again. Fortunately, it worked, and it spent the whole day checking for updates.

Thursday: My computer still hadn't finished checking for Windows updates when I turned it on, so I waited for it to finish.

Friday: I started to set up the BBB development environment on my new computer.
I changed "sudo npm install grunt-cli" in package.json.
Unfortunately, it showed an error when I ran "./" in the bigbluebutton-html5 directory.

The error was:

  "react" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/imports/ui/components/user-list/chat-list-item/component.jsx (web.browser)
  "load-grunt-tasks" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/Gruntfile.js (web.browser)
  "react-dom" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/imports/ui/components/chat/message-list/component.jsx (web.browser)
  "react-router" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/imports/ui/components/user-list/chat-list-item/component.jsx (web.browser)
  "history" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/imports/startup/client/routes.js (web.browser)
  "classnames" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/imports/ui/components/user-list/chat-list-item/component.jsx (web.browser)
  "underscore" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/imports/ui/components/button/component.jsx (web.browser)
  "react-intl" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/client/main.jsx (web.browser)
  "react-addons-css-transition-group" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/imports/ui/components/whiteboard/default-content/component.jsx (web.browser)
  "react-modal" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/imports/ui/components/modals/settings/submenus/SessionMenu.jsx (web.browser)
  "react-autosize-textarea" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/imports/ui/components/chat/message-form/component.jsx (web.browser)
  "classnames/bind" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/imports/ui/components/user-list/user-list-item/component.jsx (web.browser)
  "react-intl/locale-data/en" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/client/main.jsx (web.browser)
  "react-intl/locale-data/es" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/client/main.jsx (web.browser)
  "react-intl/locale-data/pt" in /home/firstuser/dev/bigbluebutton/bigbluebutton-html5/client/main.jsx (web.browser)

I tried cloning my git repository again, and the console showed another error when I ran "npm install" according to the instructions.

The error was:
  npm ERR! Error: EACCES, mkdir '/home/firstuser/tmp/npm-112557-pCIYTMRR'
  npm ERR!  { [Error: EACCES, mkdir '/home/firstuser/tmp/npm-112557-pCIYTMRR']
  npm ERR!   errno: 3,
  npm ERR!   code: 'EACCES',
  npm ERR!   path: '/home/firstuser/tmp/npm-112557-pCIYTMRR' }
  npm ERR!
  npm ERR! Please try running this command again as root/Administrator.
  npm ERR! System Linux 4.2.0-27-generic
  npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "install"
  npm ERR! cwd /home/firstuser/dev/bigbluebutton/bigbluebutton-html5
  npm ERR! node -v v0.10.25
  npm ERR! npm -v 1.3.10
  npm ERR! path /home/firstuser/tmp/npm-112557-pCIYTMRR
  npm ERR! code EACCES
  npm ERR! errno 3
  npm ERR! stack Error: EACCES, mkdir '/home/firstuser/tmp/npm-112557-pCIYTMRR'
On Sunday I tried to set it up on my laptop.
My laptop had the same problem, so I googled for a solution.

Finally, I found the solution:
"npm install" was not correct.
"sudo npm install" is correct.

How stupid I am!! OMG!!

by Jaeeun(Anna) Cho at June 14, 2016 03:14 PM

softlock problem in VMware

When I turned on VMware, it showed "soft lockup - CPU#1 stuck for 23s".

I tried a forced shutdown, and then VMware seemed to work correctly.
Unfortunately, my VM could not get an IP address for my remote server.

I tried:

sudo service networking restart
sudo /etc/init.d/network restart
sudo service network-manager restart

sudo ifdown eth0 && sudo ifup eth0

But the last command showed "No DHCPOFFERS received." and "No working leases in persistent database - sleeping."

Finally I found the solution.
apt-get -o Acquire::ForceIPv4=true update

by Jaeeun(Anna) Cho at June 14, 2016 03:09 PM

SVG and Canvas in HTML5

  • It stands for Scalable Vector Graphics.
  • It is used to define graphics for the web.
  • It defines graphics in XML format, so users can edit an SVG image with a text editor after creating it.
  • It is built into the document using elements, attributes, and styles.
  • While SVG can be delivered as a standalone file, the initial focus is on its natural integration with HTML.
<svg height="100" width="100">
  <circle cx="50" cy="50" r="40" stroke="black" stroke-width="3" fill="red" />
</svg>

    The height and width attributes define the height and width of the <svg> element.
    The <circle> element is used to draw a circle.
    The cx and cy attributes set the x and y coordinates of the center of the circle.
    The r attribute is the radius of the circle.
    The stroke attribute is the color of the circle's outline, and stroke-width is the thickness of that line.
    The fill attribute is the color of the circle.

    • Canvas is used to draw graphics on the web.
    • It presents bitmap images with JavaScript (it is pixel based).
    • It was introduced by Apple for Safari, and other graphical widgets.
    • The images on a canvas are forgotten after being rendered to the browser. If the image changes, the script needs to redraw the entire scene.

    <canvas id="myCanvas" width="200" height="100" style="border:1px solid black;"> </canvas>
    var c = document.getElementById("myCanvas");
    var ctx = c.getContext("2d");
    ctx.fillStyle = "red";
    ctx.fillRect(20, 20, 100, 50); // draw a red 100x50 rectangle

by Jaeeun(Anna) Cho at June 14, 2016 03:08 PM