June 7, 2023
Simple Secure Code ChatGPT Demo
ColdFusion developer for 20+ years, professional experience in 10 other languages & frameworks. Artist, nerd, Jeep enthusiast.

If you haven’t heard, there’s this cool new thing the kids are doing called “AI”. I’m not sure what that stands for, but it is very popular.

One of these AIs is a Large Language Model called ChatGPT, which comes in a variety of flavors (and costs!). Today I wanted to show you how easy it is to build a simple little app that will send the code in one of your files to ChatGPT and have it check for security concerns.

We’re going to be using Adobe ColdFusion’s powerful file handling capabilities here, in particular the fileRead() function. Let’s dive into the code (please assume this code is inside a <cfscript> block).

var filePath = "C:\ProjectFortuna\cfusion\wwwroot\cffiledemo\demofile.cfm";
We start by defining a path to the file. In this case I am hard coding it, but you could, of course, also use a file picker in a form to grab a file’s location, or even use cfdirectory (or the directoryList() function) to list out the contents of a directory so you could pick the file to read. But I wanted to start simply, so we’re pointing to a demo file in a directory in my ColdFusion wwwroot, inside a folder called cffiledemo.
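If you do want to go the directory-listing route, here is a quick sketch using the built-in directoryList() function, pointed at the same demo folder (the path and the .cfm filter are just assumptions for this demo):

```cfml
// List the .cfm files in the demo directory so a user could pick one.
var demoDir = "C:\ProjectFortuna\cfusion\wwwroot\cffiledemo\";
var cfmFiles = directoryList(demoDir, false, "name", "*.cfm");
writeDump(cfmFiles);
```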
By the way, here is what the code in that file looks like:
<cfquery name="demoquery" datasource="demosource">
SELECT * 
FROM tbl_main
WHERE id = #URL.id#
</cfquery>
<cfdump var="#demoquery#" />
If you just said “yikes” a couple times, that’s probably good.
The way ChatGPT works is via what are known as “prompts”. Essentially, it is natural language processing: you ask it a question, it answers. So what we are going to do here is ask it a question in a hard-coded way, then append the contents of the file, and then pass the whole kit and caboodle to ChatGPT and see how badly we’ve done our job.
And so:
var fileContents = "Is the following code secure? If not, explain why. " & EncodeForHTML(fileRead(filePath));
That will be the prompt. I should add, it is a very simple prompt, and you can feel free to change up the wording, but this is fairly concise (which is important, as I will soon explain) and produces a good enough answer for our needs today.
Then we will use CFHTTP to make our API call over to OpenAI’s ChatGPT endpoint. So this is just a bit of general, boring, housekeeping:
var httpService = new http();
httpService.setUrl("https://api.openai.com/v1/chat/completions");
httpService.setMethod("post");
httpService.addParam(type="header", name="Content-Type", value="application/json");
Note that the service URL (the “endpoint”) is different for the different language models and products available from OpenAI. The params are also different. Speaking of which…
httpService.addParam(type="header", name="Authorization", value="Bearer #APPLICATION.openaikey#");
You will need an API key to call this endpoint. You can learn about getting your key here: https://openai.com/blog/openai-api
Here is where the fun really begins:
httpService.addParam(type="body", value='{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "#fileContents#"}]
}');
We pass in the model we want to use (in this case I believe several of the GPT-3.5 models use this endpoint) and then we pass in the role and the “content”, which is the prompt and code we read in earlier.
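One caution: interpolating the file contents straight into a hand-built JSON string will break if the code contains double quotes or newlines. A safer sketch is to build the body as a struct and let serializeJSON() handle the escaping:

```cfml
// Build the request body as a struct; serializeJSON() escapes quotes,
// newlines, etc., so the file contents can't produce invalid JSON.
var body = {
    "model": "gpt-3.5-turbo",
    "messages": [{ "role": "user", "content": fileContents }]
};
httpService.addParam(type="body", value=serializeJSON(body));
```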
var apiResponse = httpService.send().getPrefix();
writeDump(apiResponse.filecontent);
Then we send in the request and dump the results.
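Dumping filecontent gives you the raw JSON. If you would rather display just the model’s answer, you can deserialize it first; a small sketch (remember that CFML arrays are 1-based):

```cfml
// Parse the raw JSON response and pull out just the assistant's answer.
var result = deserializeJSON(apiResponse.filecontent);
// choices is an array; CFML arrays start at 1, not 0.
writeOutput(result.choices[1].message.content);
```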
Let’s check out what ChatGPT thinks of our incredible code. Oh. Gosh. It doesn’t seem to like it at all.
{"id":"chatcmpl-7Oumow5OU9DUIBT0zvhj69pyGDEGE","object":"chat.completion","created":1686172514,"model":"gpt-3.5-turbo-0301","usage":{"prompt_tokens":138,"completion_tokens":63,"total_tokens":201},"choices":[{"message":{"role":"assistant","content":"The code is not secure because it is vulnerable to SQL injection attacks. The query includes directly user-inputted values from the URL without any sanitization or validation, which can allow malicious users to manipulate the query to execute unintended commands. A safer approach would be to use prepared statements or input validation to mitigate this risk."},"finish_reason":"stop","index":0}]}
So it looks like maybe we need to head back to those OWASP classes after all! Something to note here are the “token” counts. ChatGPT is billed by the number of tokens used, which is why I was trying for a concise prompt. As you can see, the total tokens used here was 201. Not bad on GPT-3.5 Turbo, but costs can add up on something like GPT-4.
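And since ChatGPT suggested prepared statements: the CFML equivalent is cfqueryparam. Here is the demo query with that fix applied (assuming id is an integer column):

```cfml
<cfquery name="demoquery" datasource="demosource">
SELECT *
FROM tbl_main
WHERE id = <cfqueryparam value="#URL.id#" cfsqltype="cf_sql_integer">
</cfquery>
<cfdump var="#demoquery#" />
```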
So be sure you know your rate limits and costs to avoid unexpected bills at the end of the month! Happy AIing!