Our Difference

Our founder is a developer with 20 years of programming experience. He created this platform to fill a void in the marketplace for high-quality code assessments.

After years of hiring developers and teaching at a coding bootcamp (where students were tested weekly), it became apparent to him that a better assessment tool could be built.

We believe our platform is different and better. Here's why.

Our goal is to create high-quality tests that are modern, fair, and effective, and we believe our platform creates just the right setting for our test authors to achieve that. Each author is a veteran programmer with a decade or more of experience, a community leader, and an expert in the specific area we've asked them to cover. All of our authors are hand-picked by our founder, who is himself an experienced developer.


How are multiple-choice tests better?

Our success rests on three principles: we test for real coding knowledge rather than algorithmic puzzle-solving, we test for concepts rather than terminology, and we provide analytics that depict a candidate's knowledge instead of a simple pass/fail result.

At face value, multiple choice seems pretty basic. However, Questionable is unique in that our technology allows authors to write real code inside the testing content. We haven't found any competitor offering a comparably rich set of features in this area. We've invested heavily in high-quality tools for our test authors, and that investment has paid off in high-quality testing content.

Knowledge over Algorithms

Developers dislike whiteboard algorithm challenges because that style of evaluation rarely reflects a coder's actual skills in real-life software engineering.

Other assessment apps proudly strive to build a digital version of the whiteboard coding challenge: they ask the candidate to write an algorithm that solves a problem, then "grade" the submission with a unit test, which is code that runs the submitted code and checks its output.

Setting aside the fact that algorithm challenges are ineffective to begin with, a unit test can only examine the end result of the code. It can report pass or fail, with no regard for quality, best practices, modern technique, or even the strategy used.
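
To make this concrete, here's a minimal sketch (our own illustration, not any particular competitor's grader): two submissions of wildly different quality that pass the same unit test and would therefore earn the same grade.

    import unittest

    def fib_clean(n: int) -> int:
        # Idiomatic iterative solution: O(n) time, O(1) space.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def fib_messy(n: int) -> int:
        # Naive recursion: same answers, exponential running time.
        return n if n < 2 else fib_messy(n - 1) + fib_messy(n - 2)

    class FibTest(unittest.TestCase):
        def test_results(self):
            # The grader sees only end results, so both submissions pass.
            for fib in (fib_clean, fib_messy):
                self.assertEqual(fib(0), 0)
                self.assertEqual(fib(1), 1)
                self.assertEqual(fib(10), 55)

    if __name__ == "__main__":
        unittest.main()

A unit-test grader scores both submissions identically, even though one of them would never survive a code review.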

As if that weren't bad enough, only a small portion of a developer's coding skills can even be evaluated with this form of testing. Multiple-choice coding tests, by contrast, can probe any piece of knowledge.


Concepts over Terminology

How can multiple-choice coding tests be better?

As with any testing platform, the quality of a test largely rests in the hands of the question authors. However, the platform sets those authors up for success or failure. We've found that, when given a multiple-choice platform that doesn't let them write code in their questions, authors tend to write lots of terminology-oriented questions.

Terminology-heavy tests don't effectively assess whether a programmer has real skills. While some programmers have an academic background, this industry changes quickly and every programmer learns on their own; the formal terms can be elusive even when the programmer understands the concepts well.

Our platform is different because our tools not only allow but encourage authors to write code in their questions. This lets our authors create high-quality questions where the test taker is expected to read and understand code in order to succeed, just like in a real job.
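
For example, a question in this style (a made-up illustration, not an actual item from our question bank) might look like this:

    What does the following Python code print?

        values = [1, 2, 3]
        doubled = [v * 2 for v in values if v % 2]
        print(doubled)

    A) [2, 4, 6]
    B) [2, 6]
    C) [4]
    D) [2, 4]

The correct answer is B: the "if v % 2" filter keeps only the odd values (1 and 3), which are then doubled. Answering requires reading and mentally executing real code, not recalling a definition.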

Getting the specifics

Since our questions are multiple choice with real code in them, we can get very specific about the types of knowledge we assess. Each question is tagged to indicate which subjects it tests. Because of this, we can generate a report for each test taker showing which subjects they grasped well and where their knowledge has gaps. We call this the Gap Analysis report, and it's available with every professional test we provide.
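
Conceptually, the report is a straightforward aggregation over those tags. Here's a minimal sketch (illustrative only; the data shapes and names are assumptions, not our production implementation):

    from collections import defaultdict

    # Each answered question carries the subject tags it tests.
    responses = [
        {"tags": ["closures", "scope"], "correct": True},
        {"tags": ["closures"], "correct": False},
        {"tags": ["async", "promises"], "correct": True},
        {"tags": ["async"], "correct": False},
    ]

    totals = defaultdict(lambda: [0, 0])  # tag -> [correct, answered]
    for response in responses:
        for tag in response["tags"]:
            totals[tag][1] += 1
            if response["correct"]:
                totals[tag][0] += 1

    # Subjects with low ratios are reported as knowledge gaps.
    for tag, (correct, answered) in sorted(totals.items()):
        print(f"{tag}: {correct}/{answered} correct")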


Confident Results

Unfortunately, cheating is always a major concern when evaluating skills with a test, and we do all we can to ensure that test takers are honest. While testing, content cannot be copied for easy web searches; and given the nature of non-terminology, code-centric questions, it would be nearly impossible to search for correct answers in the first place. We also offer adjustable settings such as password-protected tests, webcam monitoring, time limits, and options to prevent takers from taking a test more than once.
