Over the years working as a software engineer and now a product manager, I’ve encountered multiple situations where I needed to periodically extract numerical data from a page and create visualizations, typically line charts, to help me spot trends over time. For example, I wanted to extract product prices and monitor them over time. Or I wanted to query a search engine periodically and extract the number of matches or the position of a specific page for SEO purposes. So I’d hack together scripts to fetch pages, parse them, store the extracted numbers in a file, and then turn them into charts. Here are some more use cases that all boil down to this same need:
- Analyze your websites and APIs in pre-production, test, and live environments
- Extract numbers from services you offer or third-party services you use to analyze trends
- Track currency exchange rates, temperature information, mortgage rates, etc.
- Extract metrics from an API that returns JSON (see the sketch after this list)
- Extract metrics for compliance monitoring, quality assurance, and UX testing
- Extract and monitor metrics for SEO optimization
- Track your company’s rating on a review website
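To make the JSON API case concrete, here’s a minimal sketch of the kind of extraction involved, written in Python. The endpoint and field names are hypothetical placeholders, not a real service:

```python
import json
import urllib.request

URL = "https://api.example.com/rates?base=USD"  # hypothetical endpoint

# Fetch the API response and parse it as JSON.
with urllib.request.urlopen(URL, timeout=10) as resp:
    payload = json.load(resp)

# Pull out the single numeric value we want to track over time.
eur_rate = float(payload["rates"]["EUR"])  # hypothetical field names
print(eur_rate)
```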
Then there are related use cases that are more about monitoring performance of web pages and websites:
- Extract additional performance or other metrics exposed by browser APIs (see the sketch after this list)
- Track how long a certain user journey takes on your websites
- Analyze the performance of competitors’ websites
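And here’s a minimal sketch of reading a metric exposed by a browser API. I’m using Playwright here purely for illustration; the URL is a placeholder:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://www.example.com/")  # placeholder URL
    # Navigation Timing Level 2: total time from navigation start
    # to load event end, in milliseconds.
    load_ms = page.evaluate(
        "() => performance.getEntriesByType('navigation')[0].duration"
    )
    print(f"page load: {load_ms:.0f} ms")
    browser.close()
```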
What all these use cases have in common is that they can be handled with more or less the same approach, since they all need the same pieces of functionality:
- Something that runs periodically, like a cronjob
- A mechanism to fetch a web page or make a call to an HTTP API (aka REST API or JSON API)
- The ability to parse the response to such requests and extract numerical data from it
- Charting and dashboarding capability to turn the collected data into a visual representation
- The ability to create alert rules with conditions and notification mechanisms like email, text/SMS, Slack, etc.
The old me would install a bunch of open-source tools on some server, write scripts to curl pages and parse the responses, stick the script(s) in a cronjob, and so on. I could still do that, but that approach feels like a hack to me now. Times have changed and there are easier ways. In this article, I’ll show you how I used a synthetic monitoring tool, specifically Sematext Synthetics, to handle several of the use cases listed above.
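For reference, here’s roughly what that old hacked-together approach looks like: a small script that cron runs periodically to fetch a page, parse out one number, and append it to a CSV file for charting later. The URL and the price-matching regex are hypothetical placeholders:

```python
#!/usr/bin/env python3
import csv
import re
import time
import urllib.request

URL = "https://shop.example.com/product/123"  # hypothetical product page

# Fetch the page and look for the price in the raw HTML.
html = urllib.request.urlopen(URL, timeout=10).read().decode("utf-8")
match = re.search(r'itemprop="price"\s+content="([\d.]+)"', html)

# Append a (timestamp, price) row that a charting tool can pick up later.
if match:
    with open("price_history.csv", "a", newline="") as f:
        csv.writer(f).writerow([int(time.time()), match.group(1)])
```

A crontab entry like `0 * * * * /usr/bin/python3 /opt/scripts/track_price.py` (again, a hypothetical path) would run it hourly. It works, but you own the server, the script, and everything that breaks.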
What is Synthetic Monitoring?
The primary synthetic monitoring use case is monitoring the performance of websites or APIs.
When you monitor the performance of a website or an API with a synthetic monitoring tool, you typically get several metrics out of the box, such as Core Web Vitals, page response times, availability, and more.
Conveniently, synthetic monitoring tools tend to provide exactly what we need:
- They are designed to test a website or API periodically, so they act a bit like a cronjob, but without requiring access to a server to run it.
- Because they test websites and APIs, they obviously can fetch their content.
- Not all synthetic monitoring solutions let you parse out numerical data, but Sematext Synthetics does (see XXXX documentation), and of course, you can take the extracted data and create dashboards with charts.
- Finally, alerting is table stakes for monitoring tools, and typically they integrate with multiple notification mechanisms.
Tips
Here are some best-practice tips that apply to use cases like the ones described above. Follow these suggestions to get the most out of synthetic monitoring while keeping your costs minimal.
- Use a single location. When monitoring websites and APIs you often want to test from multiple locations, so you can measure performance from different geographical regions or different parts of the internet. For the use cases described here, though, a single location is all you need.
- Use a long interval. When monitoring performance you typically want to be notified of performance degradations ASAP. However, when the goal is visualizing trends over longer periods of time, you typically don’t need to collect data frequently. So use the longest reasonable interval for running the monitor.
- Use the appropriate monitor. If you are extracting data from an API that returns JSON or XML, use the HTTP monitor. If you are extracting data from a web page that returns HTML, or if you are looking to collect a performance metric from a web browser API, then, of course, use the Browser monitor (see the sketch below).
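To illustrate the Browser monitor case, here’s a minimal sketch of extracting a number from rendered HTML, again using Playwright purely for illustration. The URL and CSS selector are hypothetical; this mirrors the “track your company’s rating on a review website” use case from earlier:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://reviews.example.com/company/acme")  # hypothetical URL
    # Read the rating out of the rendered DOM via a (hypothetical) CSS selector.
    rating = float(page.inner_text("span.rating-value"))
    print(rating)
    browser.close()
```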
Summary
Who says synthetic monitoring tools have to be used only for monitoring performance? Think of them as a friendly cronjob running in the cloud. And because it all runs in the cloud, there is nothing to install, update, upgrade, patch, or manage; everything described above can be done via the UI. Perhaps best of all, it’s all very affordable!