pocketlife is a telemetry module used to gather information about the runtime of Python programs. It uses wrapper functions and system calls to gather data, organized into three main classes: User, Network, and Application. Secondary classes include Global and Fetch. Fetch returns data via one-off functions, while Global class functions are wrappers used to genericize the Fetch class functions.
This telemetry software has a few obvious features: CPU, RAM, and OS checking/fingerprinting all exist. My personal favorite, and the most useful feature, is the function tracer. The function tracer records the start and end time of a function, along with resource usage and arguments, and POSTs that to a database. It is invoked by placing the @pocketlife.Application.FunctionTrace wrapper above a function:
import os
import pocketlife

# Credentials picked up by pocketlife when POSTing telemetry
POCKETLIFE_USERNAME = "APIUser"
POCKETLIFE_PASSWORD = "SecurePassword123"
POCKETLIFE_HOSTNAME = "https://pocketlife.xyz/FileProject.php"

@pocketlife.Application.FunctionTrace
def FileSizeChecker(filepath):
    return os.path.getsize(filepath)

FileSizeChecker("/home/testuser/massive_file.txt")
FileSizeChecker("/home/testuser/tiny_file.txt")
Upon running, the following output is produced:
{"result": 10737418240, "function_name": "FileSizeChecker", "execution_time": "0.0002", "cpu_usage_change": "0.00", "ram_usage_change": "0.00", "function_arguments": "{\"args\": [\"/home/stduser/massive_file.txt\"], \"kwargs\": {}}"}
{"result": 10485760, "function_name": "FileSizeChecker", "execution_time": "0.0001", "cpu_usage_change": "0.00", "ram_usage_change": "0.00", "function_arguments": "{\"args\": [\"/home/stduser/tiny_file.txt\"], \"kwargs\": {}}"}
All calls to the telemetry server are handled using JSON POSTs. JSON data is validated, then handed off to the MySQL database for storage. I have the PHP API and SQL code hosted in the GitHub repository linked below.
Two types of authentication are used:
1. Program -> POST API: Basic Auth
2. POST API -> SQL Database: Basic Auth
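As a rough illustration of the first hop, here is a minimal sketch of a telemetry POST with Basic Auth using the requests library. The payload shape and return handling are assumptions for illustration, not pocketlife's actual wire format.

import requests

API_URL = "https://pocketlife.xyz/FileProject.php"

def post_telemetry(entry: dict, username: str, password: str) -> bool:
    """POST one telemetry entry as JSON using HTTP Basic Auth."""
    response = requests.post(
        API_URL,
        json=entry,                 # serialized to a JSON request body
        auth=(username, password),  # Basic Auth: Program -> POST API
        timeout=5,
    )
    return response.status_code == 200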
An obvious limitation of the first authentication type (Program -> POST API) is that the credentials must be embedded in variables hardcoded in the traced Python program. Even if you "compile" the program using PyInstaller or another obfuscation technique, it may still be possible to retrieve them with binary analyzers/decompilers. Best practice is to ensure proper JSON handling is done on the PHP side (which I have done already), and to keep the PHP API permissions as low-privilege as possible (i.e., it only does what's needed). With that in place, even if someone grabs the credentials, they can't POST much without knowing the source payload limitations, and can't POST anything of importance beyond useless data.
IP whitelisting or fingerprinting via some hardware identifier could also be worth looking into, but that's out of scope for this basic system. It would limit external abuse of the API in the case of the credential set leaking.
POST API credentials are hosted directly in the PHP code. Optimally, the password would be stored as a hash in the database and compared against the POSTed password on each authentication attempt. To keep the scope small and reduce overall logic, I skipped this, but it's not a hard implementation. I'll likely do it in the future.
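The idea, sketched in Python for illustration (the actual API is PHP): store a salted hash once, then recompute and compare at auth time. The hash parameters below are assumptions.

import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Create a salted PBKDF2 hash; store both salt and digest in the database."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash from the POSTed password and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored)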
SQL database user credentials are hardcoded directly into the API code. This user handles all data uploads into the database after JSON validation is done in the PHP code. Using a different complex username/password pair in each file is important, as is assigning a strict set of permissions to the database user.
When creating your database user, this is an example of what you'd want:
CREATE USER 'TNdR4qtupsPMqSv7xfgv'@'localhost' IDENTIFIED BY 'PwSk2Yf0BbvjUfA4G6Tu';
GRANT INSERT, UPDATE ON your_database.* TO 'TNdR4qtupsPMqSv7xfgv'@'localhost';
FLUSH PRIVILEGES;
This is a write-only user. Because the API is only used to INSERT JSON into the database, it makes sense to limit what the user can do in the case of credentials leaking. This shouldn't matter much anyway, since the user is restricted to 'localhost': remote connections over the internet (non-PHP SQL calls) or over the LAN won't work.
UPDATE can most likely be omitted from the user permission assignment. It's not used in the API currently, but was added in case it's needed in the future (for whatever reason).
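If you drop UPDATE, the grant above reduces to a single INSERT privilege:
GRANT INSERT ON your_database.* TO 'TNdR4qtupsPMqSv7xfgv'@'localhost';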
A primitive custom queuing system exists to ensure telemetry is still captured during a network outage or offline usage. If you're offline, entries are appended to a .json file in your home directory (configurable through a global variable in the module). When the program runs, the module first attempts to purge the queue (if it exists), then sends the new telemetry entries. The queue, as is standard in queuing software, works on a FIFO (First In, First Out) model, which keeps timestamping on the database side at least "somewhat" accurate in terms of program task execution order.
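A minimal sketch of that flush-then-send flow, with an assumed file location and a post_entry callable standing in for the actual POST:

import json
import os

QUEUE_PATH = os.path.expanduser("~/pocketlife_queue.json")  # assumed location

def enqueue(entry: dict) -> None:
    """Append a failed entry to the on-disk queue, one JSON object per line."""
    with open(QUEUE_PATH, "a") as queue_file:
        queue_file.write(json.dumps(entry) + "\n")

def flush_queue(post_entry) -> None:
    """Purge queued entries oldest-first (FIFO) before sending new telemetry."""
    if not os.path.exists(QUEUE_PATH):
        return
    with open(QUEUE_PATH) as queue_file:
        entries = [json.loads(line) for line in queue_file if line.strip()]
    remaining = []
    for entry in entries:
        # Once one send fails, keep the rest queued to preserve FIFO order.
        if remaining or not post_entry(entry):
            remaining.append(entry)
    if remaining:
        with open(QUEUE_PATH, "w") as queue_file:
            for entry in remaining:
                queue_file.write(json.dumps(entry) + "\n")
    else:
        os.remove(QUEUE_PATH)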
API code generally functions as follows:
These three classes/functions are used to genericize the Fetch class functions.