In my previous post, Intro to JAWA – Your Automation Buddy, I went over what JAWA is, a little bit about why it was created, and how to get started with it. In this post we’ll dive into the problem I was recently trying to solve for one of my customers.

I was presented with a problem by one of my customers. They needed to report on the usage of their caching servers in the field. They were already gathering the data as an Extension Attribute but wanted the data to update more frequently than every inventory. As you probably know, generating an inventory update can be very chatty and cause a lot of other things to happen on the Jamf Pro server (like recalculating all Smart Groups). In a larger environment (say, over 500 devices), this can cause some stress. Also, there’s no easy way to generate an inventory update more frequently than once a day.

In order to pull this off, we’ll need a few things:

- A script on the endpoint to capture the statistics and send them to JAWA.
- The JSS ID (computer ID) and the ID of the Extension Attribute stored somewhere on the endpoint.
- A LaunchDaemon on the endpoint to run the script.
- A custom webhook in JAWA to receive the data from the endpoint.
- A Python script tied to the webhook to send the data to Jamf Pro.

The command to gather the cache statistics that we are interested in — basically the stats for the last hour — can be done using this bit of code:

LastHour="$(sqlite3 "/Library/Application Support/Apple/AssetCache/Metrics/Metrics.db" "select SUM(ZBYTESFROMCACHETOCLIENT) from ZMETRIC where cast(ZCREATIONDATE as int) > (select strftime('%s','now','-1 hour')-978307200)")"

We will combine the output from that command with an if/else condition to set the proper byte unit (Bytes, KB, GB, etc.) and then send that to JAWA via the LaunchDaemon.

Since we do not want to make unnecessary API calls to Jamf Pro, and do not want to pass Jamf Pro API credentials to an endpoint (or store them there), we need a way to capture the computer ID and the EA ID and store them locally. We can grab the computer ID using the Jamf binary on the device, and we can pass the EA ID as part of the policy that deploys our LaunchDaemon and update script to the endpoint.

Create a policy in Jamf Pro that runs the script. This policy can be set to run only once per computer, or you can have it run once a day/week/month so that the proper computer ID is always on the caching server (although the ID should not change unless a device is re-enrolled).

Now that we have our script that sends data to Jamf Pro, we’ll need to create a custom webhook in the JAWA server and upload that script so it can update the computer record in Jamf Pro. Once you’ve signed into your JAWA server, navigate to the Webhooks section of the server, click on Custom, and finally click on Create in the nav bar at the top.
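The LaunchDaemon that runs the script once an hour might look like the sketch below; the label and script path are assumptions, and the interval can be adjusted to taste.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.cachestats</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/Library/Scripts/cachestats.sh</string>
    </array>
    <key>StartInterval</key>
    <integer>3600</integer>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```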
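The endpoint script described above can be sketched roughly as follows. This is a minimal, hedged sketch, not the original post's script: the JAWA webhook URL, the path of the local file holding the computer ID and EA ID, and the JSON field names are all assumptions.

```shell
#!/bin/bash
# Sketch of the endpoint script: read the last hour of caching stats,
# convert the byte count to a human-readable unit with an if/else
# chain, and POST the result to a JAWA custom webhook.
# JAWA_URL and ID_FILE are hypothetical placeholders.

JAWA_URL="https://jawa.example.com/hooks/cachestats"   # assumption
ID_FILE="/Library/Scripts/cache_ids.txt"               # assumption: "COMPUTER_ID EA_ID" on one line
DB="/Library/Application Support/Apple/AssetCache/Metrics/Metrics.db"

humanize() {
    # Pick the proper unit (Bytes, KB, MB, GB) for a raw byte count.
    local bytes="${1:-0}"
    if [ "$bytes" -lt 1024 ]; then
        echo "${bytes} Bytes"
    elif [ "$bytes" -lt 1048576 ]; then
        echo "$(( bytes / 1024 )) KB"
    elif [ "$bytes" -lt 1073741824 ]; then
        echo "$(( bytes / 1048576 )) MB"
    else
        echo "$(( bytes / 1073741824 )) GB"
    fi
}

# Only do the real work on a machine that has the metrics DB and IDs.
if [ -f "$DB" ] && [ -f "$ID_FILE" ]; then
    LastHour="$(sqlite3 "$DB" "select SUM(ZBYTESFROMCACHETOCLIENT) from ZMETRIC where cast(ZCREATIONDATE as int) > (select strftime('%s','now','-1 hour')-978307200)")"
    read -r COMPUTER_ID EA_ID < "$ID_FILE"
    curl -s -X POST "$JAWA_URL" \
        -H "Content-Type: application/json" \
        -d "{\"computer_id\": \"$COMPUTER_ID\", \"ea_id\": \"$EA_ID\", \"cached\": \"$(humanize "$LastHour")\"}"
fi
```

The guard at the bottom means the script is a no-op on machines that are not caching servers, which also makes it safe to scope the deployment policy broadly.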
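On the JAWA side, the Python script tied to the webhook might look something like the following sketch. It updates the Extension Attribute through the Classic API's computers endpoint; the server URL, the use of environment variables for credentials, and the payload field names are assumptions, not the original post's script.

```python
#!/usr/bin/env python3
"""Sketch of a JAWA webhook script that pushes an EA value to Jamf Pro.

Assumptions: the webhook payload carries the computer ID, the EA ID and
the cached-bytes string; credentials come from environment variables.
The Classic API accepts an XML PUT to /JSSResource/computers/id/<id>.
"""
import base64
import os
import urllib.request


def build_ea_xml(ea_id: str, value: str) -> str:
    """Build the XML body that updates a single Extension Attribute."""
    return (
        "<computer><extension_attributes><extension_attribute>"
        f"<id>{ea_id}</id><value>{value}</value>"
        "</extension_attribute></extension_attributes></computer>"
    )


def update_computer_ea(jss_url: str, computer_id: str, ea_id: str, value: str) -> None:
    """PUT the EA value onto the computer record in Jamf Pro."""
    creds = f"{os.environ['JSS_USER']}:{os.environ['JSS_PASS']}"
    headers = {
        "Content-Type": "text/xml",
        "Authorization": "Basic " + base64.b64encode(creds.encode()).decode(),
    }
    req = urllib.request.Request(
        f"{jss_url}/JSSResource/computers/id/{computer_id}",
        data=build_ea_xml(ea_id, value).encode(),
        headers=headers,
        method="PUT",
    )
    resp = urllib.request.urlopen(req)
    # print(resp.status, resp.read())  # print the status code and response body while debugging


# Example (would make a live API call, so it is left commented out):
# update_computer_ea("https://jss.example.com", "123", "45", "2 GB")
```

Because the script runs on the JAWA server, the API credentials never leave it — which is the whole point of not storing them on the endpoints.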
JamfUploader is the name given to a set of AutoPkg processors designed to interact with the Jamf Pro APIs. Most of these processors are concerned with uploading things to a Jamf Pro server. This repo contains the source code of the JamfUploader processors. (Identical copies of the processors are hosted in the autopkg/grahampugh-recipes repo, in the JamfUploaderProcessors folder.) Please see the Wiki for instructions on using the AutoPkg processors.

The jamf-upload.sh script can be used to take advantage of the JamfUploader processors without needing any AutoPkg recipes. The standalone_uploaders folder contains standalone scripts that do the same thing as the AutoPkg processors; these are now deprecated and require a Python 3 installation. Please see the Wiki for instructions on using the standalone scripts, jamf-upload.sh, the AutoPkg processors, and other tips and tricks.