I answered a question on StackOverflow recently about creating a solution to dynamically clear out certain environment variables that had been set during a collection run. It was something that I had done before in a manual way, by adding hard-coded string values to an array and then iterating through the list to unset each one.
I wasn't aware of how to do this dynamically, so I thought it was a great opportunity to learn something new. Whenever I'm trying to do anything in the `Tests` or `Pre-Request Script` tabs, I will always take a quick look at the Postman Sandbox API reference page to see if there's anything I can use – the `pm.environment.toObject()` method jumped out at me, as it would give me the dynamic element I needed so I wouldn't have to hard-code any values within an array.
I'm using `_.keys()` to get a list of all the keys within the `pm.environment.toObject()` object and then using `_.each()` to iterate through them. To unset the variables with the "demo" prefix, I've added an `if` statement and used the `startsWith()` method to grab the ones I want.
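Putting those pieces together, the Tests tab script looks something like this. It's only a sketch: in Postman's sandbox, `pm` and lodash's `_` already exist as globals, so the stand-in definitions at the top are there purely so the logic can be tried in plain Node, and the variable names are invented for the demo.

```javascript
// Stand-ins for the Postman sandbox globals - NOT needed in Postman,
// where `pm` and lodash's `_` are provided for you.
const _ = { keys: Object.keys, each: (list, fn) => list.forEach(fn) };
const pm = {
  _vars: { demoToken: 'abc123', demoUserId: '42', baseUrl: 'https://example.com' },
  environment: {
    toObject() { return { ...pm._vars }; }, // all current variables as an object
    unset(key) { delete pm._vars[key]; }    // removes a single variable
  }
};

// Grab every key, iterate through them and unset any that
// start with the "demo" prefix.
_.each(_.keys(pm.environment.toObject()), (arrItem) => {
  console.log(arrItem);
  if (arrItem.startsWith('demo')) {
    pm.environment.unset(arrItem);
  }
});
```

In Postman itself, only the `_.each(...)` block is needed – everything above it comes for free in the sandbox.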
For demo purposes, I’ve manually added these variables into an environment file to demonstrate what the script is doing. In a more realistic workflow, these variables would have been created during a collection run, using the `pm.environment.set()` function.
I have a mixture of variables here, some using the “demo” prefix and some without. It’s the ones with the prefix that we will be clearing out, after the request has been made.
Before the request is sent, the environment variables can be seen using the environment quick look feature.
The end result of running the script, which will run after the request has been made, is that it clears out the environment variables that start with the “demo” prefix. This prefix could be changed to match one that you may use in your collections.
I've added a `console.log(arrItem)` statement to the code to show, in the image, the keys that were iterated through while the script was running. When a key matches the `if` statement condition, it's passed into the `pm.environment.unset()` function and removed.
The code snippet can be found at the link below – please feel free to use and modify it to suit your needs. As I love to learn and my JS knowledge is still at a novice level, I'd be happy for someone to make the code more efficient.
After using an application like Postman for a while, I’m always still pleasantly surprised when I stumble upon particular features that I never knew were included in the native application. Postman comes with many cool features out of the box and one that’s included is an awesome ‘time saver’.
I'm always creating little Node.js applications and helper tools to assist my testing and help produce things like a ton of test data far quicker than I could manually. Some of this data includes dates and times in all kinds of different formats – I make use of the moment.js library, an awesome utility module that reduces the pain of working with anything time/date related.
I was very pleased several months ago, when reading through Postman’s documentation, to discover that this awesome module comes built-in with the native client! Win!
How to start using moment within Postman?
If you're familiar with Node.js and the way that you reference external modules in your scripts, you're basically halfway there…If not, don't worry, it's super simple – all we need to do is write the following line in the Pre-Request Script or Tests tab, depending on where you would like to use it.
Postman already knows what the moment module is, so we don't have to install it and save it anywhere – we're basically just telling the application that we'd like to make use of it within our test script. Now that we've made the reference to the module, we can use the `moment` variable to access all the awesome features!!
I'm going to show you a few ways that you can use this within your requests, to give you a flavor of what you could do with it. Once you're comfortable with the syntax, you can start to explore the documentation a little bit more and find some new cool ways to start making use of this in your own context.
To be honest, if all you're after is an ISO date format, there's no real benefit in bringing in the external module – this example would create the same time object using either the native JS or the moment way.
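For example, the two approaches side by side. The moment lines are shown as comments so this sketch runs in plain Node without installing anything; inside Postman, `require('moment')` just works because the module is bundled with the app.

```javascript
// In Postman's Pre-Request Script or Tests tab you would write:
//   var moment = require('moment');
//   moment().toISOString();
// which produces exactly the same ISO 8601 string as native JS:
const nativeIso = new Date().toISOString();

console.log(nativeIso); // e.g. "2018-04-06T10:00:00.000Z"
```
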
Where I feel moment brings value is when you want to add some formatting to the time object to suit a specific endpoint when POSTing data or you need to add a start or end time to a URL parameter filter etc. The way that moment chains the different functions together makes it easier, as a human, to read the syntax and have an instant understanding about what it’s actually doing. I don’t personally feel that you get this when using the native JS syntax.
Formatting the time object is very simple using moment – There are lots of different options available to use, a full list can be found here. I’ll show you a couple of options below:
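Here's a taste of the format tokens. The moment calls are shown in comments (they run as-is in Postman's sandbox, where moment is built in); the native line underneath is a rough stand-in for the first token so the sketch runs anywhere, and the example outputs are illustrative.

```javascript
// A few of moment's format tokens, as used in Postman's sandbox
// (example outputs for a date of 1st April 2018):
//   moment().format('YYYY-MM-DD')    -> "2018-04-01"
//   moment().format('Do MMMM YYYY')  -> "1st April 2018"
//   moment().format('HH:mm:ss')      -> "14:05:09"
// A rough native equivalent of the first token, runnable anywhere:
const isoDay = new Date().toISOString().slice(0, 10);
console.log(isoDay); // "YYYY-MM-DD"
```
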
This shows a selection of different formats that can be easily created – there are far too many combinations to show you here but there should be something in there to suit your needs when making requests in Postman.
I mentioned that we might want to add some time values to certain URL parameters for our requests – the image below shows how this can be done using the `add` and `subtract` functions, all chained together to make things easy to read.
This shows a time value created in 3 different ways – 10 minutes in the past, now, and finally 10 minutes in the future. I'm just using `minutes` in this example but this could be seconds, hours, days, weeks, years etc.
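The chained calls behind that look something like this. The moment syntax is in the comments; the plain `Date` lines underneath are a dependency-free equivalent so the sketch runs in plain Node.

```javascript
// moment's readable, chainable style (available in Postman):
//   moment().subtract(10, 'minutes').toISOString()  // 10 mins in the past
//   moment().toISOString()                          // now
//   moment().add(10, 'minutes').toISOString()       // 10 mins in the future
// A native JS equivalent of the same three values:
const TEN_MINUTES = 10 * 60 * 1000;
const past   = new Date(Date.now() - TEN_MINUTES).toISOString();
const now    = new Date().toISOString();
const future = new Date(Date.now() + TEN_MINUTES).toISOString();
console.log(past, now, future);
```

Swap `'minutes'` for `'seconds'`, `'hours'`, `'days'` and so on for the other units.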
All the basic examples have just logged these values out to the Postman Console, let’s quickly look at how we could use this on the requests that we are making. The best way is to store these as either an environment or a global variable – Once saved, you will be able to reference this value and use this within the different parts of your request.
This example creates a Global variable with the moment ISO date time as the value – This was created after the request so it’s not that useful but if we were to add this to a Pre-Request Script, which executes just before the main request, we could reference this value in a POST request body.
The same method of creating variables, this time an environment one, could be used in the Pre-Request Script to create some dynamic URL parameter filters.
This example would create the variables before the request is sent and use these two time values in the URL parameters, which would, in theory, give you a 1 hour time window.
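A sketch of that Pre-Request Script. In Postman, `pm` and `moment` are sandbox globals, so the stand-ins at the top exist only to make the logic runnable in plain Node – and the `startTime`/`endTime` variable names are my own invention for the example, not anything Postman requires.

```javascript
// Stand-ins for the Postman sandbox globals - NOT needed in Postman.
const store = {};
const pm = { environment: { set: (key, value) => { store[key] = value; } } };
const moment = () => {
  let t = Date.now();
  return {
    // This tiny stub only understands 'hours' - enough for the sketch.
    subtract(amount, unit) { t -= amount * 60 * 60 * 1000; return this; },
    toISOString() { return new Date(t).toISOString(); }
  };
};

// Create a one hour window that the URL parameters can reference
// as {{startTime}} and {{endTime}}.
pm.environment.set('startTime', moment().subtract(1, 'hours').toISOString());
pm.environment.set('endTime', moment().toISOString());

console.log(store.startTime, store.endTime);
```

In Postman, only the two `pm.environment.set(...)` lines are needed, plus `var moment = require('moment');` at the top.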
These are very basic use cases but hopefully this will give you an idea of how and where you could use the moment module in Postman for all of your time based needs. If there is anything that you’re trying to achieve and you’re still unsure of how this all pins together – Please feel free to add a comment or you could reach me @dannydainton on Twitter. I’m always happy to help out.
I'm still currently adding different Postman related content to this GitHub repo, hopefully some of this information is useful to you. It's an ongoing project so it will never be a 'finished' resource.
Last Friday, I fulfilled one of my own personal goals by giving my first ever talk at a Conference. This wasn’t just any conference, this was at the place where it all started for me – This was the conference that ignited my passion and my love for Software Testing nearly 5 years ago.
To me, TestBash Brighton feels like I’m going home to see my family, the place holds so many great memories and I’ve met so many great friends there that, in their own way, have helped me during my testing journey.
My ‘talk’ was basically a story of my life – How I failed big time during my school years, discovered my love for learning in the British Army and how this was amplified to new levels when I found the wonderful world of Software Testing. The slides and my speakers notes can be found here, if you would like to take a look.
As this was my first ever talk at a conference (I didn’t even practice it anywhere, other than in my house) I had a massive fear of the unknown – I knew that people wanted me to succeed and not fall flat on my face but that doesn’t really tell my brain to stop panicking about it all.
This photo was taken by Matthew Parker when I arrived at the venue. Although I have a smile on my face (that might be from the 'grumpy' massage I was getting from Patrick Prill), I was in fact having an internal meltdown. It's really strange – I've been to places like Iraq and Afghanistan but nothing compares to the anxiety that you feel when you know that you'll be up on that stage, with multiple eyes on you for 30 minutes.
I got a huge case of the ‘next in line effect‘ while I was sat there listening to Emily Webber‘s talk – I could hear her talking but the whole time I was going over in my head what I was going to say during my talk. As the clock ticked closer to 1000, I literally couldn’t remember any of my talk, I kept opening my laptop to read my notes again trying to make it stick in my mind.
I had opted to go for a slide deck full of images so if I couldn’t recall the words, I was screwed because I had nothing to read. Thankfully, I could see my notes on the laptop in front of me during the talk so I could roughly see what I had written and this jogged my memory.
Before the talk, I told myself to just focus on one or two people to help me relax but that went out the window in the first few seconds and for about 30 minutes, I don’t think I looked at a single person directly in the eyes – That’s what panic and fear does to you, you can plan to do something in advance but ‘no plan survives first contact with the enemy’.
I managed to get through it in the end with my credibility hopefully intact and was absolutely blown away by the overwhelming kind words I received from different people. It means a lot to me that people enjoyed the story and found little bits of it that they could relate to.
A huge THANK YOU needs to go to three amazing ladies that helped me shape my shell of a talk into something that I was proud to share with everyone in the room.
My absolute hero Rosie Sherry, who has helped me so much over the years that I'm sure she's sick of me mentioning her name by now. Gwen Diagram, who is just an absolute ball of energy, made some killer suggestions and helped me see the light and change areas of my talk that basically looked a bit crap. Finally, the incredible Deborah Lee – her constant support, encouragement and just being there for me is something that I will never forget.
A special mention needs to go out to the people who sent me private messages of support before my talk – Thank you all so much! Hopefully I did you all proud.
Once again, the magical wonderfulness of Testbash gave me the same feeling that I always have when I leave Brighton and hopefully will continue to do so forever.
I’m obsessed….I can freely admit that and be perfectly comfortable with saying it! The object of my unhealthy obsession is Postman – If you know me and have been following any of my work lately, you’d know that for sure. I’m always talking about how awesome it is as a tool and I’m also creating free content in a public Github repo to help others learn more about the tool and all the different wonderful ways to use it.
So I've established that I'm into Postman in a big way – I'm always looking to help people with any questions they may have about using the application; no question is too small. The trouble I've found is that very few people actually approach me, which is totally fine, but because I'm naturally a helpful person…in a totally weird way, I would love to have a ton of problems to try to get my head around. I love challenging myself and knowing where my limits are; I'm still learning as I go, so it's great to just evaluate where I'm currently at with my knowledge.
Last month, as I was researching a new Postman example that I was writing, I got stuck on a particular problem and, like many people in that situation, I turned to Google. When the results of my search came back I was surprised to see lots of links to Stackoverflow – thinking about it now, it seems perfectly reasonable; it's a tool that has been used by millions of people around the world and has been around for several years…people were bound to have questions about how to do certain things.
Just a bit of background about my previous encounters with Stackoverflow – I'm always tinkering with different applications or different programming languages, so when I've searched online for help with a problem, that site has been the main source of my information. It's probably the go-to place when you have a development type problem to solve. I've asked a couple of questions on there in the past and got an answer extremely quickly…It saved me days of banging my head against the wall!!
Getting back to Postman…I started to use Stackoverflow's search feature with the 'postman' and 'postman-collection-runner' tags applied – this brought back a whole host of questions that I could instantly answer, some new and some old. Yay! I had a new outlet for my obsession! Postman is a relatively niche topic on the site; it's referenced a lot because people use it while developing and testing APIs or Web Services, so it's mentioned in thousands of questions, but as a topic, only ~2500 questions have been tagged.
The whole Stackoverflow site is built on a model of reputation: the more questions you answer, the more reputation points you get – you can also get points for many other things like up-votes, editing posts etc. It gamifies the whole process and, as well as wanting to help others, you also want to build up your reputation and probably your personal credibility on the site. As I was a new user, I had a score of about 10, I think – I got these points from the 2 questions that I asked a couple of years ago. I wanted to set myself a target of getting up to 500 points – I thought that was quite reasonable for someone just answering questions about a single tool….I didn't expect to learn as much from helping people as I did in that short amount of time.
The very first problem that I faced as a new user to the site was that, because I had a reputation of under 50, I wasn't allowed to comment on any of the questions – why was this such a big problem? Think about the worst bug report you've ever seen…something so vague, void of details, impossible to reproduce given the information and just basically a load of crap. That's the level of some of the questions asked by users on the site seeking an answer to a technical problem…The ability to comment gives you a place to seek clarification and to tease out more details, but you can't even do that until you've gained enough points – which absolutely sucked!!
Thankfully, I answered a few basic questions and got some points on the board, which meant I could then extract more information via the comments section and actually help people. Over the course of about a month, I'd done myself proud – I'd answered a bunch of different questions of varying degrees of difficulty, using the same method I've been using when explaining the different Postman features in my Github examples. In turn, I've helped many people but, above all, learnt a bunch of new stuff along the way.
I didn't manage to reach my 500 point target but I got pretty bloody close!! I'm still checking in on the site but I'm going to step back a bit from it now and concentrate on my upcoming Testbash talk in Brighton.
I continued to answer questions that other people have asked on the Stackoverflow site and I've just broken through the 2000 point mark…very proud! I think I'm going to take a step back now and concentrate on something else – as it stands, I've answered 119 questions so I'm very happy that I could help that many different people.
This month I started a mini project on Github to create a small knowledge base, all around the REST Client tool Postman. I've been using this tool for a while now and I'm a massive fan, so I want to share some of the knowledge that I have gained with other people.
It's basically a list of examples that use the tool and its many cool features to interact with a public API. This wonderful resource, Restful-Booker, was created by Mark Winteringham as a safe place for people to learn more about API testing and an active platform to try out tools like Postman.
In one of the examples, I explain how to use the Manage Environments feature. This allows you to create an environment file and then assign pieces of data to variables. That data can then be referenced in any of your Requests within a Collection. This is very handy during the creation of an API, where you may have different environments to test the API, like development, staging, pre-production etc. The routes of the API will generally stay the same but the baseURL will change depending on the environment location. Check out the example to learn about this in more detail.
So why write a separate blog post?
I’m always fully open to learning new things and sometimes you stumble across things by accident or as the result of looking into something else. I love creating visual helpers when I’m trying to explain something – “A picture paints a thousand words”.
To fully explain what I meant by the different environments in the ramblings above, I thought I would create a couple of super basic Node.js Express APIs locally and then edit the Hostfile on my local machine to override the DNS for the localhost domain, so that I could show requests being made to dev-restful-booker and staging-restful-booker in Postman.
The code for each API is crazy simple. I wanted to mimic the actual Restful-Booker API, so I added the Content-Type header and also made it return a 201 Created status code. The only real difference between my mock dev and staging APIs was the .send() value and the port number that it was running on.
So I added the names to my local Hostfile and started the APIs…that’s when I hit my problem…I wasn’t aware, until I tried it out, that you couldn’t use the IP + port number as an entry in the Hostfile. Using the IP on its own is fine but it didn’t like me adding the port number too! 😦
This meant that my amazing idea of showing this in action in Postman was ruined….or was it?! I headed to Google, that's what we all do, right – thankfully, it didn't take long until I found a comment on Stackoverflow that mentioned you could just use the HOSTS feature in Fiddler to do this instead. Fiddler is a free web debugging proxy tool and an absolute must have for Developers and Testers working in the web development space. I use it all the time but, because I'd never had a need to do this, I wasn't even aware that you could do it within the tool.
Accessing this feature is simple: in Fiddler, select the Tools menu option and then select the HOSTS… option at the bottom of the list. I added the following entries and hit Save.
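The entries follow the familiar hosts-file shape of "target, then hostname" but, unlike the OS hosts file, Fiddler lets you include a port in the target. Mine looked something like this (the ports and hostnames here are illustrative):

```
127.0.0.1:3001 dev-restful-booker
127.0.0.1:3002 staging-restful-booker
```
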
I spun up my two mock node APIs and bingo!! We were back in business!!
So that was it, Fiddler saved the day and I was able to add what I needed to the Postman example and learnt something new in the process! Win – Win!!
Please do check out my examples on Github if you’re interested in learning more about different ways that you can use Postman.
I absolutely love Testability as a topic and to my delight, Ash Winter wrote a great post recently where he posed a series of questions – Naturally, I jumped all over these to see how my team measures up against these questions. In fact, I was a little too quick to react and came up with a score that wasn’t actually on the page (Blame the typo on Ash, not my poor reading skills).
I wanted to write this post to try and explain where I feel my team is at right now and give my answers to the questions. Some people may feel that I haven’t really answered the question correctly but I can only answer them based on my current team and in our current context.
1 – If you ask the team to change their codebase do they react positively?
For me, that all depends on what the change is you’re being asked to make and who is requesting the change. No one in the team is massively nervous about making changes to the codebase but the change has to be something that we, as a team, all agree on and believe it to be the right thing to push us forward. With any change, big or small, we will always plan it out together as a full team and walk through the changes to assess any risk that this may introduce, based on the current information and our understanding of our services. If there are gaps in the information during the walk through, we will spike ahead and try to uncover the answers to the known unknowns.
2 – Does each member of the team have access to the system source control?
I believe this is a fundamental part of being a member of a development team. In our team, as it stands today, only the Product Owner doesn't currently have access. That's not because he isn't allowed – he just feels that he doesn't really need it. Everyone apart from him has checked in "something" to source control over the last 18 months. This could be production code, a utility script or a testing tool, a JSON template for one of our monitoring Grafana dashboards etc.
3 – Does the team know which parts of the codebase are subject to the most change?
Every member of the team has been on the project from Day 1, the collaboration and communication are something that I feel we excel in. Everyone is capable of working on any aspect of the project, we identified certain silos forming early on and made changes to spread the knowledge amongst the whole team so we don’t have a single point of failure. People will always be stronger in some areas but we all have a collective understanding of the changes being made at all times. We do a 3 Amigos on every story before starting the work and as we are quite a small team this generally involves every team member so everyone knows what changes are being made to what area.
4 – Does the team collaborate regularly with teams that maintain their dependencies?
Our API was released as a pilot a few months ago to a select group of customers who are giving us valuable feedback. As the project matures, the data available to them is becoming a lot richer, which is of benefit to the end user, and we are constantly aware that our changes must not be breaking ones, as the API is being used to create custom integrations. Over the last few weeks, we have picked up some internal consumers that are creating an integration based on the data we provide. This has been very collaborative and it's awesome from a testing perspective, as we get super quick feedback and can react to this and make changes where necessary.
5 – Does the team have regular contact with the users of the system?
Before the internal consumers started their integration we were only dealing with 1 or 2 main external contacts who were providing us with feedback. For me, this could have been better or more structured. The customer was based in the States so there were time zone issues to address and the general information passed back to us was hard to decipher and not clearly presented so it just ended up being a bit too much back and forth. It’s a lot better now but it could always be improved. At the end of the day, we’re creating software for our customers/end users so they should be involved as much as possible.
6 – Can you set your system into a given state to repeat a test?
This is one that I’m extremely happy with, as a tester, having the ability to control the system and put it into any state you want and at any point, is a priority. We have built Testability in from the start, there are hooks in place to stop a specific service, we can easily spin up or tear down services to get to a set point. We have also created a suite of tools to allow us to achieve many things, one favourite of mine is a tool that grabs all the events from any live interaction and replays these in a local environment as if they were happening in real time. This can be done in seconds and helps with debugging issues to get to a solution quicker.
7 – Is each member of the team able to create a disposable test environment?
We use a Vagrant development environment which is just awesome, the instructions are clear and well laid out which allows anyone in the business to bring up their own environment within a few minutes. There are a few requirements for certain applications and languages to be installed but a script has been created to pull down all of these items, this is also attached to the set of instructions. Each service has a set of rake tasks, so building the C# .Net services or the Node.js API and starting them up is just a single line in a terminal. Our team has created an events Simulator that will mimic a live Contact Centre (The core service of NewVoiceMedia) and feed these events into the locally running services. At this point the endpoints are available and requests can be made to get the data. I’ve not timed the installation but it’s pretty bloody quick to go from nothing to seeing a response from the API.
8 – Is each member of the team able to run automated unit tests?
Of course! This goes hand in hand with being able to access the source control and spinning up your own local development environment. I would have serious concerns if this wasn’t the case.
9 – Can the team test both the synchronous and asynchronous parts of their system?
This is very closely related to question 6 in my opinion, knowing what control you have over the system in order to put it into a specific state. More important than being able to test these parts, you first of all need to know where and when they occur in your system.
10 – Does each member of the team have a method of consuming the application logs from Production?
We mainly use Papertrail (we also have other tools available) to aggregate the logs from all Staging, Pre-Prod and Production environments. I have set up groups within the tool to make it easier to see what is happening on all our boxes in one central place. I'm all for consistency, so the group names are the same in each environment, apart from the environment identifier. Papertrail is great at what it does but I find the UI not amazing to look at, so I use a Chrome extension called Stylish to present it in a way that works for me. It just manipulates the CSS to present the page in the way that you want – I tend to use colour to show specific pieces of information.
11 – Does the team know what value the 95th percentile of response times is for their system?
We have gone a bit mental in this area, the team loves monitoring and performance based metrics. We’ve invested a lot of time and effort so that we are answering questions about the impact of increased load and stress on the services, with data rather than a best guess finger in the air measurement. Not only do we cover performance but we have also exposed the state of the resources (CPU, memory and disk space, etc.) on each box. We have multiple different Grafana boards giving us constant real time feedback.
12 – Does the team curate a living knowledge base about the system it maintains?
At NVM we use Confluence from Atlassian, each team has its own space as well as the wider department spaces. Our team has built up a wealth of information about all aspects of the project. We did this from the start to document our journey but this has grown and improved as the platform has matured. It's full of useful information about our architecture, environments, how-to guides, team culture, deployment and release information etc. We periodically curate the content to ensure that it's still relevant and that the space isn't full of dead/redundant information.
I hope my answers provide some information about how we function as a team and give you an insight into the things that we feel are important. As for things I would add: currently, we also deploy our services to multiple regions around the world, so having a team based understanding of how that all pins together is very valuable. We are still learning and growing as a team – we started out on this project with the intention of having Testability baked in from Day 1. I personally think we're doing pretty well, but we haven't stopped yet, so maybe I will have an updated list of answers in a few months.
Any questions you would like me to answer based on my responses, please leave a comment below! Cheers for reading!
Sometimes you do certain tasks that become normal and you kind of go into autopilot, blindly repeating the same thing over and over again. It normally takes someone else or something else to spark something in your mind and give you an idea that there is always a simpler solution.
Recently, I’ve been using a strategy when testing, that involves creating “Test Queues” within RabbitMQ (The message broker that we use for our microservices) that siphon all the messages from the queues that our microservices are consuming – The Test Queues hold/store them, so that I can investigate deeper into the message data. Unlike the actual queues, these do not have consumers so the messages will stay there until I choose to delete them, adding an extra layer of control. I could just manually stop the service and grab the messages from the service queues but I like to give myself a separate option.
For context – My Test Queues would live alongside the red ones in the image above but will not have consumers, the arrows to the right of the red queues…hopefully, that makes sense. 🙂
My boring repetitive problem…
RabbitMQ is an awesome message broker and, like it says on the site, offers "Messaging that just works". What they haven't concentrated on is the usability of their UI management console – why should they? That's not their main focus, and they currently have something that's good enough for now. As much as having these queues is really useful, clearing them out or purging them is a tedious task, made worse by the fact that I follow the same process several times a day…
Click on the queue name > scroll to the bottom of the page > Click on the “Purge” button > scroll back to the top of the page > Select the Queue tab to get back to the main view…repeat…yawn!
I'm a fan of getting a local instance of the technologies that we use within our feature team and exploring lots of different aspects, so I feel more comfortable and more informed about the tools I'm working with. I knew that RabbitMQ has an HTTP RESTful API with a limited set of features, and I could use some of these to my advantage!!
RabbitMQ has been around for about 10 years now so I had a hunch that someone has probably had the same problem as me and wanted to use the API to purge the messages within a queue…I headed over to Google…
Basically, the first result that came back was about 90% of the solution that I required…Bingo! I found this from 5 years ago – Like I said, it wasn’t exactly what I wanted so I needed to adapt the code to suit my requirement.
I only needed to refactor the code slightly and add an `if` statement that checks to see if the queue name starts with the "TestQueue_" prefix and, if so, purges the queue. All the other queues are unaffected. Job done!
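The adapted logic, sketched here as a standalone Node script rather than my original browser snippet. The endpoints are RabbitMQ's documented management API (`GET /api/queues` and `DELETE /api/queues/{vhost}/{name}/contents`), and the host, port and credentials below are the RabbitMQ defaults – adjust them for your own broker.

```javascript
const BASE = 'http://localhost:15672/api'; // default management port
const AUTH = 'Basic ' + Buffer.from('guest:guest').toString('base64');

// The filter from the post: only queues with the prefix get purged,
// every other queue is left untouched.
function isTestQueue(name) {
  return name.startsWith('TestQueue_');
}

// List every queue, then purge just the matching ones.
async function purgeTestQueues() {
  const res = await fetch(`${BASE}/queues`, { headers: { Authorization: AUTH } });
  const queues = await res.json();

  for (const queue of queues) {
    if (isTestQueue(queue.name)) {
      await fetch(
        `${BASE}/queues/${encodeURIComponent(queue.vhost)}/${queue.name}/contents`,
        { method: 'DELETE', headers: { Authorization: AUTH } }
      );
      console.log(`Purged ${queue.name}`);
    }
  }
}
```

Call `purgeTestQueues()` against a running broker to clear every "TestQueue_" queue in one go.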
During the testing of the code, I added some logging just to sanity check that I was getting the correct queues that I wanted and discarding all the other ones. All the “null” entries below are queues that do not start with “TestQueue_” so are ignored.
Once the snippet was run, I could see that the correct requests were being made…Yay! The messages were deleted from only the queues I wanted.
So I now have a link on my browser that saves me the pain of going through the tedious repetitive task to purge each queue. It’s the simple things in life that make me happy.
Alan wrote a post to accompany his video that references lots of links about how others have used bookmarklets. One of those is this excellent blog post by Abby Bangser, which I read a while ago before I saw the YouTube video. She explains how she utilizes the bookmarklets to quickly fill in form data – It’s really interesting and I would recommend trying to give it a go yourself.
Hopefully, that has been interesting enough to spark something in your own mind and attempt to give this a go – I would love to hear from anyone who has created a bookmarklet to solve a problem.