With the new test networks replacing the sandbox environment networks, we’d like to review how to test your forecasting code on a test network. Since test networks don’t serve ads, there is no traffic history to base forecasts on. Instead, the API returns predictable results in the Forecast object so you can effectively test your application.

Forecast Service with Test Networks

The forecast service can be used by calling getForecast with a new line item, or getForecastById with the ID of an existing line item. In production, these calls tell you whether you can expect the line item to deliver its booked clicks or impressions, based on how your network has performed in the past. In a test network, the only two parameters that affect the forecast results are the lineItemType and unitsBought fields on the line item. The expected responses are summarized below and in our documentation.
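In code, the two entry points have the same shape whichever client library you use. The sketch below uses a stand-in service object rather than a real client library, and the line item ID is a placeholder:

```python
class FakeForecastService:
    """Stand-in exposing the same two operations as the real ForecastService."""

    def getForecast(self, line_item):
        # Forecast a prospective line item passed in full (it need not be saved).
        return {"availableUnits": 1_200_000, "forecastUnits": 6_000_000}

    def getForecastById(self, line_item_id):
        # Forecast an existing line item, looked up by its ID.
        return {"availableUnits": 1_200_000, "forecastUnits": 1_200_000}

service = FakeForecastService()

# New, unsaved line item: pass the whole object.
new_forecast = service.getForecast(
    {"lineItemType": "Sponsorship", "unitsBought": 50})

# Existing line item: pass only its ID (placeholder value here).
existing_forecast = service.getForecastById(123)
```

In a real integration, `service` would come from your client library; only the two method names and their argument shapes are the point here.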

| Input (LineItem field)            || Output (Forecast field)                               |||
| lineItemType    | unitsBought      | availableUnits  | forecastUnits      | deliveredUnits* |
|-----------------|------------------|-----------------|--------------------|-----------------|
| Sponsorship     | 50               | 1,200,000       | 6,000,000          | 600,000         |
| Sponsorship     | != 20 and != 50  | 1,200,000       | 1,200,000          | 600,000         |
| Not Sponsorship | <= 1,000,000     | unitsBought * 2 | availableUnits * 3 | 600,000         |
| Not Sponsorship | > 1,000,000      | unitsBought * 2 | availableUnits * 3 | 600,000         |

*In every row, deliveredUnits is 0 when the forecast is for a prospective (unsaved) line item.

It is a good idea to include a test case where the forecast service throws an exception, so you can verify that your application handles error conditions correctly. You can trigger a SERVER_NOT_AVAILABLE error by setting the line item type to Sponsorship and the units bought to exactly 20.
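To exercise that path end to end, your test can wrap the call and branch on the error reason. This is a self-contained sketch: `ApiError` and the `get_forecast` stub are stand-ins for your client library’s exception type and service call, not real library names:

```python
class ApiError(Exception):
    """Stand-in for the client library's API exception."""
    def __init__(self, reason):
        super().__init__(reason)
        self.reason = reason

def get_forecast(line_item):
    """Stub mimicking the test network's error trigger."""
    if line_item["lineItemType"] == "Sponsorship" and line_item["unitsBought"] == 20:
        raise ApiError("SERVER_NOT_AVAILABLE")
    return {"availableUnits": line_item["unitsBought"] * 2}

def forecast_or_none(line_item):
    """Return the forecast, or None when the service is unavailable."""
    try:
        return get_forecast(line_item)
    except ApiError as err:
        if err.reason == "SERVER_NOT_AVAILABLE":
            return None  # caller can surface a "try again later" message
        raise  # unexpected reasons should still propagate
```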

Dealing with Production Quota

In addition, keep in mind that in production (unlike on test networks), forecasting is a resource-intensive process on the server, and too many back-to-back requests may cause the API to throw an EXCEEDED_QUOTA error. When this occurs, we recommend backing off briefly before retrying, which brings your requests per second down. Recall that quotas are based on requests per second rather than an absolute number of requests, so a steady stream of requests is less likely to be rate limited than short bursts of many requests.

We appreciate any feedback regarding features you’d like us to highlight, or about the API in general. Please don’t hesitate to leave us a suggestion on our forum or come chat with us at one of our DFP API Office Hours Hangouts.