Scenario

In this article I discuss a client that needed a CSV listing all the billing activity data from an API. Logic Apps can easily pull the data and write it to a CSV, but the API resource returns 79,871 rows of data, 200 rows per call, with each response linking to the next page of 200 more rows. That would mean dealing with roughly 400 separate CSVs in order to have all the data present.

Problem

The challenge was having Logic Apps create one CSV that required no user editing and contained all the data from the API. This meant the first API data pull had to include its headers, but the subsequent API data pulls could not have headers. Solving this problem saves time and allows all of the data to be pulled from the API.

Solution

Use a for each loop in Logic Apps to append the CSV data into one file until all the data is copied over. The goal is to build a variable that holds the data from every API call without repeated headers, then create a OneDrive file with that data in a specified folder so it can be renamed. This is achieved by using Split and Trim in Logic Apps.

Note: This assumes you have an Azure subscription active, a OneDrive account, and knowledge of the API that will be used.

Create the initial API call

Step 1 

Create the trigger for your Logic App. I chose a recurrence trigger because this CSV will be updated weekly.

Step 2

Create the GET HTTP request

First, this API uses OAuth2, which is why the header includes a Bearer access token. Second, this API requires you to declare which fields should be returned.
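Outside the designer, the same request boils down to a URL with a fields query string plus a Bearer authorization header. A minimal Python sketch, assuming a hypothetical `EXAMPLE_TOKEN` and an illustrative field list (not the full set used in the article):

```python
from urllib.parse import urlencode

def build_activities_request(base_url, fields, access_token, limit=200):
    # Assemble the query string declaring which fields the API should return.
    query = urlencode({"fields": ",".join(fields), "limit": limit})
    url = f"{base_url}?{query}"
    # OAuth2: pass the access token as a Bearer header.
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers

url, headers = build_activities_request(
    "https://app.clio.com/api/v4/activities.json",
    ["quantity_in_hours", "date", "type"],
    "EXAMPLE_TOKEN",
)
```

The Logic Apps HTTP action carries the same two pieces: the URI with its query parameters and the Authorization header.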

Step 3

Parse the HTTP response
In this step I create a Parse JSON action. Its content should be the body from the HTTP request. This is where we get the next page URI and the data to put into the CSV.

{
  "type": "object",
  "properties": {
    "data": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "activity_description": {},
          "date": {
            "type": "string"
          },
          "matter": {},
          "quantity_in_hours": {
            "type": "number"
          },
          "type": {
            "type": "string"
          },
          "user": {
            "type": "object",
            "properties": {
              "name": {
                "type": "string"
              }
            }
          }
        },
        "required": [
          "quantity_in_hours",
          "date",
          "type",
          "activity_description",
          "matter",
          "user"
        ]
      }
    }
  }
}
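In plain code, the parse step amounts to loading the response body and picking out the `data` array and the next page URI. A small sketch against a hypothetical response body shaped like the schema above:

```python
import json

# Hypothetical response body matching the schema above (last page: no "next").
response_text = """
{
  "data": [
    {"date": "2024-01-01", "quantity_in_hours": 2.5, "type": "TimeEntry",
     "activity_description": null, "matter": null, "user": {"name": "A. Lawyer"}}
  ],
  "meta": {"paging": {}, "records": 1}
}
"""

payload = json.loads(response_text)
rows = payload["data"]                                            # rows for the CSV
next_url = payload.get("meta", {}).get("paging", {}).get("next")  # None on the last page
```

The `next_url` value is what drives the until loop later on.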

Step 4

Create a variable for the loop to continue.

 "meta": {
    "paging": {
      "next": "https://app.clio.com/api/v4/activities.json?fields=quantity_in_hours%2Cdate%2Ctype%2Creference%2Cactivity_description%7Bname%7D%2Cmatter%7Bdisplay_number%7D%2Cuser%7Bname%7D&limit=200"
    },
    "records": 79871
  }

I used this parsed JSON data to create my Next variable. The loop will use it to know when there are no more pages of records.

Note: The API I am using has a next-page token; if that were not available, I would create a count variable and increment it inside the loop.
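The page-following logic can be sketched independently of Logic Apps. A minimal Python version, where `fetch_page` is a stand-in for the HTTP-plus-parse steps (here faked with an in-memory dict for illustration):

```python
def fetch_all_pages(fetch_page):
    """Collect rows from every page of a paginated API.

    fetch_page(url) returns (rows, next_url); next_url is None on the last
    page. Passing None asks for the first page.
    """
    rows, next_url = fetch_page(None)
    all_rows = list(rows)
    while next_url:                      # keep going until the API stops paging
        rows, next_url = fetch_page(next_url)
        all_rows.extend(rows)
    return all_rows

# Fake three-page API for illustration.
pages = {None: ([1, 2], "p2"), "p2": ([3, 4], "p3"), "p3": ([5], None)}
result = fetch_all_pages(lambda url: pages[url])
# result == [1, 2, 3, 4, 5]
```

The until loop built below plays the role of the `while` here, with the Next variable holding `next_url`.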

Step 5

Create a CSV table with the Create CSV Table action from Data Operations, using the data from the parsed JSON.

Create two more variables and name one of them CSV and the other CSVloop. Set CSV to the output of the CSV table that was created.
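For reference, the Create CSV Table action behaves much like writing rows with a known column order, with the header emitted only for the first page. A sketch using Python's csv module (field names here are illustrative, not the article's full list):

```python
import csv
import io

def rows_to_csv(rows, fieldnames, include_header=True):
    # Mirrors the Create CSV Table action; header only for the first page.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, extrasaction="ignore")
    if include_header:
        writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

first_page = rows_to_csv(
    [{"date": "2024-01-01", "quantity_in_hours": 2.5}],
    ["date", "quantity_in_hours"],
)
```

In the Logic App, the CSV variable plays the role of the accumulated output, and CSVloop holds each subsequent page before its header is stripped.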

Creating the Until Loop

  1. Create the until loop and the HTTP request inside it.
  2. Repeat step 3 from the initial API call to parse the HTTP response.
  3. Create a new CSV table.
  4. Set the CSVloop variable to the new CSV table.
  5. Compose CSVloop with the split function to drop its header row.
  6. Create a for each loop.
  7. Append CSVloop to CSV.
  8. Set the Next variable to the next page URI from the parsed JSON.
  9. Set the folder path to where you want the file.
  10. Choose a name for the file, end it with .csv, and set the file content to the CSV variable.
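The header-stripping append in steps 5 to 7 can be sketched outside Logic Apps: split each subsequent page on newlines, drop the first line (the header), and append the rest to the accumulated CSV. A minimal Python version, assuming each page is a newline-delimited CSV string:

```python
def append_without_header(csv_accum, csv_page):
    # Drop the first line (the header) of the incoming page, mirroring
    # the split/skip trick used on the CSVloop variable.
    lines = csv_page.strip().split("\n")
    return csv_accum + "\n" + "\n".join(lines[1:])

page1 = "date,hours\n2024-01-01,2"
page2 = "date,hours\n2024-01-02,3"

csv_data = page1                            # first page keeps its header
csv_data = append_without_header(csv_data, page2)
# csv_data == "date,hours\n2024-01-01,2\n2024-01-02,3"
```

Only the first page contributes a header, so the final OneDrive file needs no manual cleanup.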

Virtuas

Our team @Virtuas