Bartosz Michalik
Well... it depends

Mocking the Open API Specifications

The easy way

Jun 8, 2022 · 10 min read

Table of contents

  • Introduction
  • TL;DR
  • Details

Introduction

The best way to understand the mechanics of a thing is to start interacting with it. Skimming through the documentation won't hurt either 😉. APIs are no exception to this approach. Nowadays, industry-standard API documentation should always be up to date and easy to understand and follow. In the realm of the REST API, we use specification languages like OAS or RAML. They define our API contracts between interacting parties, can be transformed into elegant and interactive documentation, and help to bootstrap the client-side (or server-side) development process. Once the first version of the contract is agreed upon, we will most likely want to battle-test it. How can we do that? Well, we can expose our production system or a version of it... unless we cannot 😉 or do not want to. Using a production platform for testing purposes might be costly or impossible – especially when a production platform does not exist yet. In such situations, API mocks come to the rescue.

An API mock is a system that can provide example-based or synthetic responses to incoming HTTP requests. So I hope that with a magic spell I will be able to turn my precious API specification into a mock you can interact with. Is that even possible?
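Conceptually, an example-based mock is little more than a lookup from request to canned response. A toy sketch of that idea (purely illustrative – the data and names are made up, and none of the tools below are implemented this way):

```python
# A toy example-based mock: map (method, path) to a canned payload.
CANNED = {
    ("GET", "/pets"): [{"id": 1, "name": "Garfield", "petType": "Cat"}],
}

def mock_response(method, path):
    """Return the stored example for a request, or a 404-style fallback."""
    if (method, path) in CANNED:
        return 200, CANNED[(method, path)]
    return 404, {"error": "no example for this operation"}
```

The tools reviewed below essentially automate building such a lookup table (plus synthetic data generation) straight from the OAS file.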

There are numerous options, and in this post, I will focus on a few of them. The list is based on openapi.tools/#mock, with the addition of a few tools I found useful in previous projects. I have selected tools that offer out-of-the-box support for OAS and are free of charge to use. In addition, I wanted to focus on solutions that do not require programming skills or advanced configuration. Such an approach comes with limitations: data is either static or looks fake. But hey, it is a start.

I have (mostly) focused on tools that require just a docker run... spell to bootstrap a mock server from an API specification. Which one to use? Can any of them be a viable solution for you? Well, it depends …

TL;DR

The table below compares all tools I have examined.

(Comparison table of the reviewed tools)

If you do not know what you are looking for go for Prism or Postman. Prism offers a lot of out-of-the-box features like support for synthetic and example-based payloads, request and response body validation, and multi-file-based OAS specifications. Postman is a popular API platform that many of you are familiar with already. Its mocking feature allows for using stored API payloads and supports synthetic payload generation.

Both seem to be viable options for developers. But, they might not be sufficient if you require more sophisticated interaction with your mock. There is no such thing as a free lunch. Thus you either need to accept limitations, invest your time and effort in something crafted to your needs, or find a paid solution.

Details

I have prepared a simple API specification (check my playground project github.com/bartoszm/api-mocking-playground) that I am using as a running example in this text. It is a version of the Pet API that supports polymorphic payloads (using oneOf), attribute validations, and instance examples. There are, however, three flavors of the API: multi-file, with externalized examples (-full-model.yaml), and single-file (-bundled.yaml). The reason for that is simple - most of these tools cannot resolve references to external resources.

API Sprout

This is a tool written in Go, with the smallest footprint of the reviewed options.

Running it is as simple as:

docker run -it --rm -v ${pwd}:/tmp -p 8000:8000 \
danielgtaylor/apisprout "/tmp/petstore-bundled.yaml"

Example-based or generated responses are used. Moreover, you can indicate which example you want via the Prefer header, like this:

curl --location --request GET 'http://localhost:8000/pets' \
--header 'Prefer: example=allCats'

The tool supports both examples and example definitions, which is a nice surprise. The synthetic responses are very basic, which is not.
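The example-selection mechanism itself is simple: the server parses the Prefer header and looks the example up by name. A hypothetical sketch of that logic (function and data names are mine, not API Sprout's actual code):

```python
def select_example(headers, examples, default=None):
    """Pick a named example based on a 'Prefer: example=<name>' header."""
    prefer = headers.get("Prefer", "")
    for part in prefer.split(","):
        key, _, value = part.strip().partition("=")
        if key == "example" and value in examples:
            return examples[value]
    # Fall back to the first defined example (or the provided default).
    return next(iter(examples.values()), default)

examples = {"allCats": ["Tom", "Felix"], "allDogs": ["Rex"]}
select_example({"Prefer": "example=allDogs"}, examples)  # -> ["Rex"]
```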

API Sprout does not resolve external $ref, which means that you need to keep your OAS in a single file. If your model happens to contain an external $ref, the mock won't start and will exit with an error.

Validation of request or response payloads is not supported.

FakeIt

The next mock server is written in Ruby. I was not able to use the latest Docker image, as it reported errors with my pet example. Version 0.9.2 runs with no problems:

docker run -it --rm -v ${pwd}:/tmp -p 8000:8080 realfengjia/fakeit:0.9.2 \
--use-example --spec "/tmp/petstore-bundled.yaml"

You can choose to use the provided examples or have them synthesized. Again, you can use both examples and example. Unfortunately, to use examples from the OAS, they must be embedded in a single resource file, as FakeIt does not deal with external $ref – and you will only learn this by interacting with the server, because it bootstraps anyway. What is missing is the ability to indicate the example of interest. Maybe that is because the response might not be exactly one example but a value composed of a few of them.

Synthesized examples look more natural than in the case of API Sprout, and each time you get a different instance - looking into the code, you can learn that the Faker library is used to generate values for some string formats. What is strange is that the same value is used for pattern each time, and you can change that only by restarting the server. A bug or a feature?
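The behavior described above looks as if pattern-based values were generated once and then cached for the lifetime of the process, while format-based values are regenerated per call. A hedged guess at what such logic might look like (this is my reconstruction, not FakeIt's actual Ruby code):

```python
import functools
import random
import string

@functools.lru_cache(maxsize=None)
def value_for_pattern(pattern):
    # Cached per pattern string: every call with the same pattern
    # returns the same value until the process restarts.
    return "".join(random.choice(string.ascii_lowercase) for _ in range(8))

def value_for_format(fmt):
    # Formats, by contrast, get a fresh (Faker-style) value on every call.
    return random.choice(["Alice", "Bob", "Carol"])
```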

Request payload validation against the schema is a feature worth mentioning, but if you don't like it you can switch it off - especially if you use required read-only attributes, as readOnly is not a feature supported by this tool.

The start configuration can be modified on a running instance using the config endpoint. I have noticed that FakeIt is less forgiving than the previous option - I had to add a type definition to my enums to have the values synthesized correctly. Its runtime footprint is among the highest in the pack.

Mockintosh

Python-based Mockintosh has a very nice UI, sophisticated response configuration mechanisms, templates, and more. Unfortunately, as a tool for quickly running an OAS-based mock, it is rather useless.

On the bright side, you can run the multi-file version, as Mockintosh can resolve external references:

docker run -it -p 8000-8005:8000-8005 -v ${pwd}:/tmp up9inc/mockintosh /tmp/petstore.yaml

But before I was able to run it, I had to fix a number of strict validation issues that were ignored by the other tools.

Although your examples need to be correctly defined, they are ignored, and the server offers empty responses. There is also a problem with the server configuration when it comes to the interpretation of the default response. As you could probably guess, it does not perform any request or response content validation against the schema.

I was quite surprised by the results, so I spent some time digging in. It turns out that support for the Swagger (2.0) specification is way more robust.

The OAS support is claimed to be experimental, and I hope that it will improve with time.

Mock Server

Mock Server seems to have more to offer for users who want to invest in configuration. Nevertheless, this Java-based tool offers better support for OAS than Mockintosh, although it requires two steps. First, you need to start the server:

docker run -it --rm -v ${pwd}:/tmp -p 8000:1080 mockserver/mockserver

Next, you need to feed it your OAS:

curl --location --request PUT 'http://localhost:8000/mockserver/openapi' \
--header 'Content-Type: application/json' \
--data-raw '{
        "specUrlOrPayload": "file:/tmp/petstore-bundled.yaml"
}'

The key concept in Mock Server is the expectation, which defines an action to be taken in response to a matched request. An expectation is built for each response defined in the Open API specification. If there is an example defined, it will be used as the expectation's response body, yet all but the first example are ignored.
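An expectation can be thought of as a (request matcher, response) pair, where the first expectation whose matcher accepts the incoming request wins. A simplified illustration of the idea (my own sketch with made-up data, not Mock Server's internals):

```python
# Each expectation pairs a request matcher with a canned response.
expectations = [
    ({"method": "GET", "path": "/pets"}, (200, [{"name": "Garfield"}])),
    ({"method": "POST", "path": "/pets"}, (201, {"status": "created"})),
]

def handle(request):
    """Return the response of the first expectation whose matcher accepts
    the request; unmatched requests fall through to a 404."""
    for matcher, response in expectations:
        if all(request.get(k) == v for k, v in matcher.items()):
            return response
    return 404, None
```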

Expectations built from OAS do not support matching by header or query parameter, nor do they enforce payload validation. Moreover, references to external examples are not resolved.

If you are interested in how your OAS was translated into Mock Server configuration, you can retrieve the configuration via the API or go to the dashboard UI (at localhost:8000/mockserver/dashboard).

OpenAPI mocker

Open API Mocker is a JavaScript project running on Node.js. I am using it with Docker like this:

docker run -it --rm -v ${pwd}:/tmp -p "8000:5000" jormaechea/open-api-mocker \
 -s "/tmp/petstore-bundled.yaml"

The tool uses basePath from the OAS definition, so you need to take that into account when interacting with the mock.

This mocker supports both example definition methods as well as synthesized examples, which can be improved using x-faker statements. Some regular JSON Schema constraints are respected as well. You can specify the example you want to use with a request header:

curl --location --request GET 'http://localhost:8000/v1/pets' \
--header 'Prefer: example=allCats'

If you don't do so, the tool will try to use path-level examples, then type-level examples, and only synthesizes responses when neither can be found. The nice thing about this mock server, however, is that you get a consistent response for a given path.
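That resolution order can be summarized as a simple fallback chain. The sketch below is my reading of the observed behavior, not the tool's actual code:

```python
def resolve_response(preferred, path_examples, type_examples, synthesize):
    """Header-selected example first, then path-level examples, then
    type-level examples, with synthesis only as a last resort."""
    if preferred and preferred in path_examples:
        return path_examples[preferred]
    if path_examples:
        return next(iter(path_examples.values()))
    if type_examples:
        return next(iter(type_examples.values()))
    return synthesize()
```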

$ref is somehow supported, but the behavior is more confusing than helpful. The tool can start with multi-file definitions, yet externally referenced examples are not resolved. When synthesizing responses, external schemas are ignored. Moreover, when an external schema describes a request payload, the tool responds with an error. This is the only notion of payload validation I have observed. However, looking into the code, there are npm libraries responsible for reference resolution (json-ref) and schema-based validation (ajv), so perhaps I am not using this tool correctly.

Prism

Prism is written in TypeScript. It runs as a node module, but you can start it from Docker as well:

docker run -it --rm -v ${pwd}:/tmp -p 8000:4010 stoplight/prism mock \
-h 0.0.0.0 "/tmp/petstore.yaml"

It supports examples and synthesized responses. To select an example, you can use a header (Prefer: example=allCats) or a query parameter:

curl --location --request GET 'http://localhost:8000/pets?example=allDogs'

Synthesized examples are very dynamic and the tool uses a variety of JSON schema constraints when producing values. You can also add x-faker definitions to attributes to make the end result even nicer.
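In other words, the generator reads constraints off the schema before producing a value. A rough, illustrative sketch of constraint-aware synthesis for strings (my own simplification, not Prism's implementation):

```python
import random
import string

def synthesize_string(schema):
    """Produce a string that honors a few common JSON Schema constraints."""
    if "enum" in schema:
        return random.choice(schema["enum"])
    lo = schema.get("minLength", 1)
    hi = schema.get("maxLength", max(lo, 10))
    length = random.randint(lo, hi)
    return "".join(random.choice(string.ascii_letters) for _ in range(length))
```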

Prism deals with $ref smoothly so you can keep your definitions and examples in multiple files.

Validation works for both request and response payloads. I have found the latter very useful, as it can help me validate whether my examples evolve along with the API changes. There is one validation-related problem I have recorded so far - Prism does not support readOnly control statements.

Postman

Postman is a tool (and a whole platform) that you might be familiar with already. It is not strictly a mock server that uses OAS definitions, but you can use it in that context. Firstly, it supports OAS imports, from which you can generate your collections. Secondly, you can use it as a SaaS-based mock server that mimics a real server using the examples you have stored in a collection. These examples can be handcrafted or recorded during interaction with the real implementation. You can even adjust them to introduce placeholders and achieve a more dynamic feel for clients interacting with the mock.

Unfortunately, Postman does not deal with payload validation (at least on the free plan).

The Postman team has described the mock configuration process here, so if you are interested, enjoy the reading.

Summary

There is no one-size-fits-all solution for mocking. Nor can you use a given OAS and expect the same result with all the presented tools. Most of the tools will ignore external $refs, some will respect JSON schema constraints, and a few will use additional control extensions for faker. However, I do believe that each of you can find a mocking solution that is good enough in your context. If not, there are other tools that claim more advanced features in exchange for money 😉.

Prism looks very good in most of the aspects that are important to me. In fact, I use it from time to time when I need a running strawman of the APIs we are building. Unfortunately, there is no easy way to inject more sophisticated behaviors (such as using request parameters in the response) if I need that. For such use cases, tools that did not shine in this post might be a better match (e.g., Mock Server). In addition, there are tools I have skipped in this post that are designed exactly for that. So if you have a bit of time and programming skills, you can build a mock that responds with almost production-like payloads. Hopefully, I will soon be able to compare and contrast them for you. So stay tuned.
