When I first learned about APIs, I didn’t really understand what they were all about. After working with them a few times, my basic description is that they’re a way for websites and apps to expose specific parts of their functionality and data to the public, according to the syntax rules they’ve defined.
APIs can be accessed directly via code or indirectly via tools like Postman, which handle some of the setup for you in an organized way. I’ve used Postman to access the Twitter API in the past, just for fun.
The first step is authentication, or “proving that you are who you say you are.” I used the POST OAuth 2.0 token method, but there are other options available. Inside Postman, I chose the “Basic Auth” option for authorization, which is “proving that you are allowed to access this specific resource.”
To do this, the Postman “username” field is the Consumer API key provided by Twitter, and the “password” field is the Consumer API secret key.
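Under the hood, that “Basic Auth” step is just your key:secret pair, base64-encoded, sent to Twitter’s OAuth 2.0 token endpoint. Here’s a rough sketch using only the Python standard library; the key and secret values are placeholders, and `basic_auth_header`/`fetch_bearer_token` are my own helper names, not part of any library.

```python
import base64
import json
import urllib.request

def basic_auth_header(key: str, secret: str) -> str:
    """Build the HTTP Basic Auth header that Postman constructs from
    the username/password fields: "key:secret", base64-encoded."""
    creds = base64.b64encode(f"{key}:{secret}".encode()).decode()
    return f"Basic {creds}"

def fetch_bearer_token(key: str, secret: str) -> str:
    """POST to Twitter's OAuth 2.0 token endpoint and return the
    bearer token from the JSON response (requires valid credentials)."""
    req = urllib.request.Request(
        "https://api.twitter.com/oauth2/token",
        data=b"grant_type=client_credentials",
        headers={
            "Authorization": basic_auth_header(key, secret),
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]
```

Postman fills in all of this for you when you pick “Basic Auth” and hit Send, which is exactly the setup convenience mentioned above.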
Once you’ve finished authorization, you can go to the next step. In my case, it was using my app to send tweets to Twitter. This time, I chose the OAuth 1.0 authorization option from within Postman, with the option to add the authorization data to the request body/request URL. You’ll need the Consumer API key and Consumer API secret key from your Twitter app, along with the Access token and the Access token secret.
From here, you go to the Body section in Postman and enter a value next to the Status key: this is your new tweet.
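In Python terms, that Body tab entry is just a form-encoded `status` parameter posted to the v1.1 update endpoint. A tiny illustration (the helper name is mine; the OAuth 1.0a request signing is left to Postman here, or to a library later):

```python
from urllib.parse import urlencode

# The v1.1 endpoint Postman is posting to behind the scenes.
UPDATE_URL = "https://api.twitter.com/1.1/statuses/update.json"

def build_tweet_body(status: str) -> str:
    """Form-encode the Status key/value pair from Postman's Body tab."""
    return urlencode({"status": status})
```

So entering “Hello!” next to the Status key in Postman produces a request body like `status=Hello%21`.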
Next, I’ll be working to code my API calls directly from Python to see if I can schedule tweets and get around the limitations of Postman. See you then.
UPDATE: I found out how to connect to the Twitter API directly with Python by importing the tweepy library. This allows me to send tweets every two minutes via my Twitter app by using Python’s time.sleep() delay function. All I had to do was store the tweets in a CSV file and then load them into my code by looping through a pandas DataFrame. Cool!
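Here’s roughly what that loop looks like. The keys are placeholders from your Twitter app, and I’ve used the stdlib csv module in the loader so the sketch is self-contained (the original used a pandas DataFrame, but the loop is the same); `OAuth1UserHandler` is the tweepy v4 name for the OAuth 1.0a handler.

```python
import csv
import io
import time

def load_tweets(csv_text: str, column: str = "tweet") -> list[str]:
    """Read the tweet texts out of a CSV, one tweet per row.
    Assumes a header row with a "tweet" column."""
    return [row[column] for row in csv.DictReader(io.StringIO(csv_text))]

def post_on_schedule(tweets: list[str], delay_seconds: int = 120) -> None:
    """Send each tweet, sleeping two minutes between them."""
    import tweepy  # third-party: pip install tweepy

    auth = tweepy.OAuth1UserHandler(
        "CONSUMER_KEY", "CONSUMER_SECRET",       # placeholders
        "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",   # placeholders
    )
    api = tweepy.API(auth)
    for text in tweets:
        api.update_status(text)     # v1.1 statuses/update, signed by tweepy
        time.sleep(delay_seconds)   # wait two minutes between tweets
```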
Update #2: My next step was to move this code into the cloud so that I didn’t have to keep my laptop on in order for it to run. I used AWS Lambda along with S3 storage buckets. The hard part was packaging my code and its dependencies for the Linux environment that Lambda runs on, using Docker and command-line scripts. A Stack Overflow thread on Lambda functions led me to a GitHub page that already had the files available to download. From there I added the extra module folders required by tweepy to the parent directory of my main Python function and uploaded the zip to S3. Done! Next, I plan to test other cloud services like Azure and Google Cloud Platform on my upcoming projects to see if I can get around the Linux headache.
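For the curious, the shape of that Lambda function looks something like this. The `lambda_handler(event, context)` signature and the boto3 S3 read are the standard pieces; the bucket name, object key, “tweet” column, and the event’s “count” field are all made up for the sketch.

```python
import csv
import io

def next_tweet(tweets: list[str], invocation_count: int) -> str:
    """Pick which tweet to send on a given scheduled invocation,
    cycling back to the start when the list runs out."""
    return tweets[invocation_count % len(tweets)]

def lambda_handler(event, context):
    """Entry point that Lambda calls on each scheduled trigger."""
    import boto3   # provided by the Lambda Python runtime
    import tweepy  # packaged into the deployment zip alongside this file

    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="my-tweet-bucket", Key="tweets.csv")  # placeholder names
    rows = csv.DictReader(io.StringIO(obj["Body"].read().decode()))
    tweets = [row["tweet"] for row in rows]  # assumes a "tweet" column

    auth = tweepy.OAuth1UserHandler(
        "CONSUMER_KEY", "CONSUMER_SECRET",       # placeholders
        "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
    )
    tweepy.API(auth).update_status(next_tweet(tweets, event.get("count", 0)))
```

Since Lambda bills per invocation, a scheduled trigger replaces the time.sleep() loop from the laptop version: each run sends one tweet and exits.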