Monday, January 29, 2024

Experimenting with AI Diffusion

Code, notebook and live demo are available on my 🤗 HuggingFace Space

I made it to Lesson 10 of the fast.ai course. Its homework was particularly fun and challenging. In a nutshell, the goal is to deconstruct an existing Diffusion model available on 🤗 HuggingFace and rebuild it component by component. This model is capable of generating an image based on a text prompt and, optionally, an existing image. Just like this:
The first step is to use an Auto-encoder to reduce the amount of data used per image. We can encode an image into a smaller Tensor, then decode it back. The image does not come back exactly the same, but the difference is marginal.


Can you spot the difference?
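
Roughly, the encode/decode round trip looks like this. This is a simplified sketch using the Stable Diffusion VAE from the diffusers library; the checkpoint name and helper functions are illustrative, not the exact code in my notebook:

import numpy as np
import torch
from diffusers import AutoencoderKL
from PIL import Image

# Assumption: the same kind of VAE used by Stable Diffusion.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema")

def encode(img: Image.Image) -> torch.Tensor:
    # PIL image -> (1, 3, H, W) tensor scaled to [-1, 1]
    x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0
    x = x.permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        # A 512x512x3 image becomes a 4x64x64 latent: roughly 48x less data.
        return vae.encode(x).latent_dist.sample()

def decode(latents: torch.Tensor) -> Image.Image:
    with torch.no_grad():
        x = vae.decode(latents).sample
    x = ((x / 2 + 0.5).clamp(0, 1) * 255).round()
    return Image.fromarray(x[0].permute(1, 2, 0).byte().numpy())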

Now that we can work on smaller Tensors, we can efficiently generate images based on a prompt. We can control how closely the model should follow the prompt by setting the guidance parameter. In the picture below, I try different values for the guidance.
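
This guidance parameter comes from classifier-free guidance: at every denoising step, the UNet predicts the noise twice, once from an unconditional (empty-prompt) embedding and once from the prompt embedding, and the two predictions are mixed. A simplified sketch of the mixing step (the function name is mine):

import torch

def apply_guidance(noise_pred_uncond: torch.Tensor,
                   noise_pred_text: torch.Tensor,
                   guidance_scale: float) -> torch.Tensor:
    # guidance_scale = 1.0 keeps just the conditional prediction; larger values
    # (7-8 is a common default for Stable Diffusion) push each denoising step
    # harder toward the prompt, at the cost of diversity.
    return noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)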

Often, the model will generate an image which does not match our expectations. We can use a negative prompt to steer the model away from specific features throughout the generation. In the guidance step above, this simply means the unconditional branch is conditioned on the negative prompt instead of an empty string. In the picture on the left, we asked the model to generate "a beautiful tree", which came back with a lot of green. If we don't want so much green, we can put "green" in the negative prompt.

Before the image generation starts, the model generates embeddings based on the given prompt. Multiple prompts can be used, and their respective embeddings can be merged. This method can generate some interesting pictures! We can also set a weight on each prompt. In the example below, we generated 10 images with different weights.
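
Here is a simplified sketch of that weighted merge, using the CLIP text encoder behind Stable Diffusion (openai/clip-vit-large-patch14 via the transformers library); the prompts and the 0.7/0.3 weights are just an example:

import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def embed(prompt: str) -> torch.Tensor:
    tokens = tokenizer(prompt, padding="max_length", max_length=77,
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        return text_encoder(tokens.input_ids)[0]  # (1, 77, 768) embedding

# Blend two prompts: the weights control how much each one influences the image.
embeddings = 0.7 * embed("a cute dog") + 0.3 * embed("a cute cat")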

If we just provide a prompt as input to the model, it will use a random noise image as a starting point. But we can also provide an existing image as the starting point. To use this method, we first add some noise to the picture, then finish the image generation as usual. In the picture below, we show a starting image, the same image with the added noise, and on the right the final generated image. The prompt for it is: "a cute dog".
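
The noising step can be done with the scheduler from the diffusers library. A simplified sketch (the strength value and the Stable Diffusion v1-5 scheduler config are illustrative):

import torch
from diffusers import DDIMScheduler

scheduler = DDIMScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")
scheduler.set_timesteps(50)

def noise_latents(init_latents: torch.Tensor, strength: float = 0.6):
    # strength = 0.0 keeps the original latents untouched; strength = 1.0 is
    # pure noise, i.e. equivalent to generating from scratch.
    start = int(len(scheduler.timesteps) * strength)
    timesteps = scheduler.timesteps[len(scheduler.timesteps) - start:]
    noise = torch.randn_like(init_latents)
    noisy = scheduler.add_noise(init_latents, noise, timesteps[:1])
    return noisy, timesteps  # denoising then runs over the remaining timesteps only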

To go from the middle picture to the one on the right, the model progressively removes noise from the picture. Each step generates what is called a latent. We can capture these latents and turn them into images to show how the model progresses through the image generation.
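
One simple way to capture them is a step callback. A simplified sketch, assuming a diffusers version where StableDiffusionPipeline still accepts the callback/callback_steps arguments (newer releases use callback_on_step_end instead):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
captured = []

def grab_latents(step: int, timestep: int, latents: torch.Tensor) -> None:
    captured.append(latents.detach().clone())

image = pipe("a cute dog", callback=grab_latents, callback_steps=1).images[0]
# Each tensor in `captured` can be run through the VAE decoder (see decode() above)
# to visualise how the image emerges from noise step by step.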

We can control the amount of noise added to the original picture. Here, we set different amounts of noise (2nd row) and see the final image generated (1st row).

You can head to my HuggingFace Space to try it out!








Sunday, December 24, 2023

Can AI replace Captcha click farms?

The code and demo for this blog are available here on my Hugging Face Space

I recently took some great AI training (here and here). One of the courses is introduced with this xkcd joke from 2014:



It made me think: what task was considered impossible for a computer to accomplish in 2014, and can now be easily achieved using AI? 🤔

2014 was the year Google released reCaptcha v2 which became, by far, the most popular captcha solution. We've all seen these prompts:

To illegally promote companies or products, influence elections, or launder money, hackers need to create thousands of fake accounts. Captchas are meant to prevent this by ensuring a real human is behind the screen looking at the image. If a computer can pass the Captcha test, a hacker can automate the account creation process. I haven't tested these illegal services, but it seems hackers hire Captcha click-farms where humans solve the Captcha tests for a fee ($3 per 1,000 captchas according to this F5 Labs article).

Now that we have self-driving cars on the road, it is no secret that computers can recognize traffic lights. However, these cars (e.g. Tesla and Waymo) are equipped with expensive hardware and backed by top engineering teams. What I was actually interested in was whether I could easily build a cost-efficient AI tool to solve a Captcha test.

The Captcha test splits an image into a 4x4 grid of 16 squares and asks which squares contain a specific object (e.g. traffic lights). One approach would be to use an image classification model and run it on each of the 16 sub-squares. But it would not be very accurate: a traffic light can span multiple squares, and a partial traffic light in a single square may not be recognizable on its own.


It turns out humans are not the only ones struggling with those 😅

My first approach: train my own AI model 

I started collecting online pictures to train my own model to solve the test. I used the duckduckgo-search Python library with keyword searches such as:

  • cars and trucks on a road with a traffic light
  • cars at a traffic junction
  • cars and buses on a road at a stop
  • car traffic taken from dash cam
Here is the code snippet for this:
from duckduckgo_search import DDGS

def search_images(keywords, max_images=1):
    # Query DuckDuckGo image search and return the list of image URLs
    with DDGS() as ddgs:
        return [r['image'] for r in ddgs.images(
            keywords=keywords,
            type_image='photo',
            max_results=max_images)]
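
For example (the keyword, count and output folder are just illustrative), the returned URLs can then be downloaded and labelled by hand:

import os
import requests

os.makedirs("raw", exist_ok=True)
urls = search_images("cars at a traffic junction", max_images=50)
for i, url in enumerate(urls):
    # Save each candidate image locally for manual review and labelling
    with open(f"raw/{i}.jpg", "wb") as f:
        f.write(requests.get(url, timeout=10).content)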

It was time-consuming to find relevant photos and label them. Some pictures were aerial shots, some had watermarks, some were artistic photos, some were pictures of dash cams instead of pictures taken by a dash cam, and others were just irrelevant or duplicates. After quite some work, I was left with only about 30 decent pictures.

This model did not perform well. One of the reasons is that the dataset was too small. I tried finding an existing dataset, but it wasn't that easy. I submitted an application to access the Cityscapes dataset, but by the time my access was granted, I already had another working solution.

Second Attempt: Use a pre-trained Image Segmentation Model

I looked on HuggingFace for an existing Image Segmentation model and found one from NVIDIA which is pre-trained on the Cityscapes dataset. I tested it and found that it can identify traffic lights very well. I wrote a script that follows these steps:

  • resize the image to 1024x1024 as expected by the model
  • use the transformers Python library to instantiate the model and pass the resized picture as input
  • as the model returns a list of masks for each object it can identify, I take the mask for the traffic lights and ignore the others
  • finally, I calculate which of the 16 squares the traffic lights fall into.

You can see the code here.
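
Below is a simplified sketch of those steps, using one of NVIDIA's SegFormer checkpoints pre-trained on Cityscapes as an example. The exact checkpoint name, the traffic light class id and the grid logic here are illustrative rather than the exact code linked above:

import numpy as np
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

MODEL = "nvidia/segformer-b0-finetuned-cityscapes-1024-1024"
processor = SegformerImageProcessor.from_pretrained(MODEL)
model = SegformerForSemanticSegmentation.from_pretrained(MODEL)
TRAFFIC_LIGHT = 6  # "traffic light" class id in the Cityscapes label set

def solve_captcha(path: str, grid: int = 4) -> np.ndarray:
    image = Image.open(path).convert("RGB").resize((1024, 1024))
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # (1, num_classes, H/4, W/4)
    # Upsample the logits back to the input resolution and keep the winning class
    logits = torch.nn.functional.interpolate(logits, size=(1024, 1024),
                                             mode="bilinear", align_corners=False)
    mask = logits.argmax(dim=1)[0] == TRAFFIC_LIGHT
    # Split the mask into a 4x4 grid and flag squares containing traffic light pixels
    cell = 1024 // grid
    hits = np.zeros((grid, grid), dtype=bool)
    for row in range(grid):
        for col in range(grid):
            hits[row, col] = mask[row * cell:(row + 1) * cell,
                                  col * cell:(col + 1) * cell].any().item()
    return hits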

I built this demo using Gradio (which you can find on my HuggingFace Space). Feel free to try it out but expect to wait a minute or so for the app to start, and it will be slow because it is running on a CPU. The white squares indicate where the traffic lights are located.


Conclusion: Can my model beat the click-farm?

I ran some basic performance tests on various hardware on Hugging Face and RunPod, then calculated the cost of solving 1,000 Captcha tests.
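
The last column of the table below is derived from the first two. As a small sanity check (using the "CPU upgrade" row as an example):

def cost_per_1000_captchas(seconds_per_image: float, hourly_cost: float) -> float:
    # 1,000 images, converted from seconds to hours, times the hourly hardware cost
    return seconds_per_image * 1000 / 3600 * hourly_cost

cost_per_1000_captchas(24, 0.03)  # ≈ $0.20, the Hugging Face "CPU upgrade" row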

Provider     | Device           | Specs                          | Time to Solve (s) / Image | Hardware Cost ($) / Hour | Cost ($) / 1,000 Captchas
Apple        | MacBook          | M3 Pro, 18 GB RAM              | 9    | –    | –
Hugging Face | Base CPU         | 2 vCPU, 16 GB RAM              | 35   | –    | –
Hugging Face | CPU upgrade      | 8 vCPU, 32 GB RAM              | 24   | 0.03 | 0.20
Hugging Face | Nvidia T4 medium | 8 vCPU, 30 GB RAM, 16 GB VRAM  | 1.2  | 0.90 | 0.30
RunPod       | RTX 4000 Ada     | 9 vCPU, 50 GB RAM              | 1.0  | 0.15 | 0.05
RunPod       | RTX 4090         | 16 vCPU, 46 GB RAM, 24 GB VRAM | 1.0  | 0.39 | 0.11
RunPod       | RTX 6000 Ada     | 22 vCPU, 50 GB RAM             | 0.85 | 0.69 | 0.16

As expected, the model runs much faster on GPU than on CPU. The GPUs are more expensive per hour, but cheaper per 1,000 Captcha images. Assuming a hacker would not care about security and reliability (which is debatable), I used spot (i.e. interruptible), community (i.e. run by 3rd parties) GPU instances.

The cheapest option I found was the RTX 4000 Ada, which can process each image in about 1 second and costs $0.15 per hour, which translates to a cost of $0.05 per 1,000 captchas processed.

This is still more expensive than the $0.003 per captcha charged by click farms, but I haven't tried hard to optimize my model.

Captcha services like Google reCaptcha or hCaptcha have other security controls and risk assessment methods which are out of scope for this blog. My focus here is only on the traffic light identification task, as it is really just an excuse for me to practice what I learned in the AI courses.

Still, if you think of security as an onion, captchas look like a very thin outer layer. They may stop unskilled attackers, but they won't hold against a motivated hacker with basic AI knowledge.



Wednesday, September 21, 2022

Configure Amazon SageMaker Studio to auto-shutdown unused compute resources and install developer productivity extensions

I have been learning and exploring Machine Learning lately. As I experimented with various tutorials and Jupyter notebooks, I found that Amazon SageMaker Studio was great for productivity. Simply put, it is a cloud IDE for Machine Learning.

There are many great extensions available to enhance the developer experience. For example, the LSP extension enables code auto-completion.


In the example above, the auto-completion helps me navigate the available classes and functions. Also, pyflakes indicates that I imported the sagemaker module but am not using it, which helps me keep my code clean.

Also, I (painfully) learned that it is easy to forget a running instance. From SageMaker Studio, you can launch services such as SageMaker Training Jobs, Hyperparameter tuning jobs or Serverless Inference. With these services, you pay only for what you use, which is great. But SageMaker Studio can run multiple instances for the various notebooks you are working on. And if you are training your model directly in the notebook, you will need a beefy instance. In my case, I was using an ml.g4dn.xlarge instance, which has 4 vCPUs, 16 GB of RAM and 1 GPU. If I let it run 24/7, it would generate unnecessary costs and CO2 emissions.

To enable the developer productivity extensions and configure the server to automatically shut down all SageMaker Studio compute resources, you have to configure a Lifecycle Configuration. There are other ways to install these extensions, but they won't persist after a shutdown, so I don't recommend using them.

This repo contains some example scripts to be used with a Lifecycle Configuration. For this blog, we will enable the LSP server and auto-shutdown extensions. I tweaked the examples from this repo and combined them into a single bash script.

The first step is to configure the server timeout: after 45 minutes of inactivity, I want the server to shut down.


#!/bin/bash
# This script installs the idle notebook auto-checker server extension to SageMaker Studio
# The original extension has a lab extension part where users can set the idle timeout via a Jupyter Lab widget.
# In this version the script installs the server side of the extension only. The idle timeout
# can be set via a command-line script, which this script also creates and places into the
# user's home folder
#
# Installing the server side extension does not require Internet connection (as all the dependencies are stored in the
# install tarball) and can be done via VPCOnly mode.

set -eux

# timeout in minutes
export TIMEOUT_IN_MINS=45
Next, the script prepares the installation of the auto-shutdown extension.

# Should already be running in user home directory, but just to check:
cd /home/sagemaker-user

# By working in a directory starting with ".", we won't clutter up users' Jupyter file tree views
mkdir -p .auto-shutdown

# Create the command-line script for setting the idle timeout
cat > .auto-shutdown/set-time-interval.sh << EOF
#!/opt/conda/bin/python
import json
import requests
TIMEOUT=${TIMEOUT_IN_MINS}
session = requests.Session()
# Getting the xsrf token first from Jupyter Server
response = session.get("http://localhost:8888/jupyter/default/tree")
# calls the idle_checker extension's interface to set the timeout value
response = session.post("http://localhost:8888/jupyter/default/sagemaker-studio-autoshutdown/idle_checker",
            json={"idle_time": TIMEOUT, "keep_terminals": False},
            params={"_xsrf": response.headers['Set-Cookie'].split(";")[0].split("=")[1]})
if response.status_code == 200:
    print("Succeeded, idle timeout set to {} minutes".format(TIMEOUT))
else:
    print("Error!")
    print(response.status_code)
EOF
chmod +x .auto-shutdown/set-time-interval.sh

# "wget" is not part of the base Jupyter Server image, you need to install it first if needed to download the tarball
sudo yum install -y wget
# You can download the tarball from GitHub or alternatively, if you're using VPCOnly mode, you can host on S3
wget -O .auto-shutdown/extension.tar.gz https://github.com/aws-samples/sagemaker-studio-auto-shutdown-extension/raw/main/sagemaker_studio_autoshutdown-0.1.5.tar.gz

# Or instead, could serve the tarball from an S3 bucket in which case "wget" would not be needed:
# aws s3 --endpoint-url [S3 Interface Endpoint] cp s3://[tarball location] .auto-shutdown/extension.tar.gz

# Installs the extension
cd .auto-shutdown
tar xzf extension.tar.gz
cd sagemaker_studio_autoshutdown-0.1.5

# Activate studio environment just for installing extension
export AWS_SAGEMAKER_JUPYTERSERVER_IMAGE="${AWS_SAGEMAKER_JUPYTERSERVER_IMAGE:-'jupyter-server'}"
if [ "$AWS_SAGEMAKER_JUPYTERSERVER_IMAGE" = "jupyter-server-3" ] ; then
    eval "$(conda shell.bash hook)"
    conda activate studio
fi;

pip install --no-dependencies --no-build-isolation -e .
jupyter serverextension enable --py sagemaker_studio_autoshutdown
Then we install the LSP extensions:

# Install:
# - The core JupyterLab LSP integration and whatever language servers you need (omitting autopep8
#   and yapf code formatters for Python, which don't yet have integrations per
#   https://github.com/jupyter-lsp/jupyterlab-lsp/issues/632)
# - Additional LSP plugins for formatting (black, isort) and refactoring (rope)
# - Spellchecker for markdown cells
# - Code formatting extension to bridge the LSP gap, and supported formatters
echo "Installing jupyterlab-lsp and language tools"
pip install jupyterlab-lsp \
    'python-lsp-server[flake8,mccabe,pycodestyle,pydocstyle,pyflakes,pylint,rope]' \
    jupyterlab-spellchecker \
    jupyterlab-code-formatter black isort
# Some LSP language servers install via JS, not Python. For full list of language servers see:
# https://jupyterlab-lsp.readthedocs.io/en/latest/Language%20Servers.html
jlpm add --dev bash-language-server dockerfile-language-server-nodejs

# This configuration override is optional, to make LSP "extra-helpful" by default:
CMP_CONFIG_DIR=.jupyter/lab/user-settings/@krassowski/jupyterlab-lsp/
CMP_CONFIG_FILE=completion.jupyterlab-settings
CMP_CONFIG_PATH="$CMP_CONFIG_DIR/$CMP_CONFIG_FILE"
if test -f $CMP_CONFIG_PATH; then
    echo "jupyterlab-lsp config file already exists: Skipping default config setup"
else
    echo "Setting continuous hinting to enabled by default"
    mkdir -p $CMP_CONFIG_DIR
    echo '{ "continuousHinting": true }' > $CMP_CONFIG_PATH
fi
Finally, the script restarts the Jupyter server and configures the timeout.

if [ "$AWS_SAGEMAKER_JUPYTERSERVER_IMAGE" = "jupyter-server-3" ] ; then
    conda deactivate
fi;

# Restarts the jupyter server
nohup supervisorctl -c /etc/supervisor/conf.d/supervisord.conf restart jupyterlabserver

# Waiting for 30 seconds to make sure the Jupyter Server is up and running
sleep 30

# Calling the script to set the idle-timeout and activate the extension
/home/sagemaker-user/.auto-shutdown/set-time-interval.sh
Now that we have the script, we can load it using the AWS CLI or from the console. Since there is already an example with the CLI here, I will show how to do this with the console.

I already have Sagemaker Studio configured and working. In the AWS console for SageMaker Studio, click on the Attach button of the Lifecycle Configurations section.


Select New configuration, choose Jupyter server app as the configuration type, and give it a name (I used autoshutdown-and-lsp).
Copy and paste your script and click on the button Attach to domain.

The Lifecycle Configuration should now be visible.


Select the script and click on the Set as default button.


From the console, you can see what is running for a user, but it does not tell you which instance types are being used.

Here is an example showing all compute resources shut down.

If SageMaker Studio is running, you can also see which Apps and Kernels are running.

If you search for lsp in the Extension section, you will see it has been automatically installed.

Finally, you can see the logs from your bash script in CloudWatch. The log group is called /aws/sagemaker/studio and the log stream name ends with /LifecycleConfigOnStart.
 

Note that if you want to update your script, you will have to start over from step 1: you can't update a script in a Lifecycle Configuration that has already been created. For this reason, I recommend creating a pipeline using the AWS CLI to manage this process. You could also store settings such as the timeout in AWS Systems Manager Parameter Store. Another approach would be to make the bash script generic and have it download and execute the necessary updatable scripts.
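
For example, such a pipeline step could use boto3 instead of the raw CLI. This is only a sketch: the config name, script file and domain id are placeholders, and you should double-check the update-domain settings against the current SageMaker API:

import base64
import boto3

sm = boto3.client("sagemaker")

# The lifecycle config content is the bash script above, base64-encoded
with open("autoshutdown-and-lsp.sh", "rb") as f:
    content = base64.b64encode(f.read()).decode()

# Existing configs can't be updated in place, so create a new one per revision
resp = sm.create_studio_lifecycle_config(
    StudioLifecycleConfigName="autoshutdown-and-lsp-v2",
    StudioLifecycleConfigContent=content,
    StudioLifecycleConfigAppType="JupyterServer",
)

# Attach it to the domain and make it the default for the Jupyter server app
sm.update_domain(
    DomainId="d-xxxxxxxxxxxx",  # placeholder domain id
    DefaultUserSettings={
        "JupyterServerAppSettings": {
            "DefaultResourceSpec": {"LifecycleConfigArn": resp["StudioLifecycleConfigArn"]},
            "LifecycleConfigArns": [resp["StudioLifecycleConfigArn"]],
        }
    },
)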

Update 09/22: I realized I was still incurring charges from an ml.p3.2xlarge instance, but I did not have any visible inference host in the console. The support team advised me to delete all endpoint configurations, models, users and domains from the console. In the Cost Explorer, you can filter per instance type to understand what is generating the cost. Additionally, I set up daily and monthly billing alerts.





Saturday, January 14, 2017

Encrypt your MSSQL database with TDE and SafeNet KeySecure, and why!


One of the easiest ways to encrypt a MSSQL (or Oracle) database is to use TDE - Transparent Data Encryption. TDE requires the higher-end Enterprise MSSQL license and requires a DBA to execute SQL commands.

Why should you encrypt your database? If a hacker gets into your network, he may be able to steal a copy of the database or parts of it. If the data is confidential or falls under HIPAA, SOX, CJIS or one of the many other regulations out there, it can become quite a headache.

One of the benefits of TDE is that the application querying the database does not need to be aware of the database encryption: it is transparent to the application. If you have an existing application and database, you can enable TDE on the database without downtime and without changing the application's code.

But there is a catch! By default, MSSQL (or Oracle) stores the encryption key in software on the same machine, so it is not protected and not physically separated from the data. Again, if a hacker has access to your network and to the data, he will also have access to the key sitting right next to it, so he can just decrypt the data. It is basically like leaving the key in the door of your car. Might as well not lock it!

SafeNet KeySecure solves this issue by keeping a Key Encryption Key outside of the database. In the video below, I walk you through the steps of encrypting a SQL database with KeySecure. We look at the MDF and backup files in a text editor before and after encryption to prove the data is being encrypted. We also look at how access to the key in KeySecure is logged.


/****************************************************/
/* Dummy database */
/****************************************************/
USE master;

CREATE DATABASE SampleDBwithPII;
GO

USE SampleDBwithPII;

Create table Customers (Id int not null, Name varchar(max) not null, Address varchar(max) not null, SSN varchar(max) not null);
GO

INSERT INTO Customers values (2, 'Matt Buchner', 'Arboretum Plaza II, 9442 Capitol of Texas Hwy, 78759 Austin TX', '111-222-3333');



/****************************************************/
/* PREP FOR TDE */
/****************************************************/


/*  enable EKM - Extensible Key Management
 you must be a sysadmin
*/

USE master;

GO 

sp_configure 'show advanced options', 1;
RECONFIGURE;

GO 

sp_configure 'EKM provider enabled', 1;
RECONFIGURE;

GO 


/* Load KS EKM - must be a sysadmin */
/* After running this command, check Security\Cryptographic Providers */
CREATE CRYPTOGRAPHIC PROVIDER safenetSQLEKM
FROM FILE = 'C:\Program Files\Safenet\SQLEKM\safenetsqlekm.dll'

GO 

/* The credentials below should match the credential in KS */
/* After running this command, check Security\Credentials */
CREATE CREDENTIAL EKMCred WITH IDENTITY='tdeuser', SECRET='P@ssw0rd'
FOR CRYPTOGRAPHIC PROVIDER safenetSQLEKM

GO 

ALTER LOGIN sa ADD CREDENTIAL EKMCred
/* example with Windows credentials
ALTER LOGIN [GTOLAB\mbuchner] ADD CREDENTIAL EKMCred;
*/

GO 

/* create a key in KS and create a reference to it in MSSQL */
CREATE ASYMMETRIC KEY SQL_EKM_RSA_2048_Key
FROM Provider safenetSQLEKM
WITH ALGORITHM = RSA_2048,
PROVIDER_KEY_NAME = 'MSSQL_TDE_EKM_RSA_2048_Key',
CREATION_DISPOSITION=CREATE_NEW

GO 

/* reuse the existing key in other cluster nodes 
CREATE ASYMMETRIC KEY SQL_EKM_RSA_2048_Key
FROM Provider safenetSQLEKM
WITH ALGORITHM = RSA_2048,
PROVIDER_KEY_NAME = 'MSSQL_TDE_EKM_RSA_2048_Key',
CREATION_DISPOSITION=OPEN_EXISTING
*/


/* check the keys have been created */
Select * from [master].[sys].[asymmetric_keys]



/****************************************************/
/* HOW TO CONFIGURE TDE */
/****************************************************/
USE master;

GO 

CREATE CREDENTIAL EKMCredTDE
WITH IDENTITY = 'tdeuser',
SECRET = 'P@ssw0rd'
FOR CRYPTOGRAPHIC PROVIDER safenetSQLEKM ;

CREATE LOGIN tde_login
FROM ASYMMETRIC KEY SQL_EKM_RSA_2048_Key ;
GO

ALTER LOGIN tde_login
ADD CREDENTIAL EKMCredTDE;
GO


/* connect to our database */
USE SampleDBwithPII ;
GO

/* create symmetric encryption key */
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER ASYMMETRIC KEY SQL_EKM_RSA_2048_Key ;
GO

/* enable encryption */
ALTER DATABASE SampleDBwithPII
SET ENCRYPTION ON ;
GO

/* query encryption state */
SELECT DB_NAME(e.database_id) AS DatabaseName, e.database_id, e.encryption_state,
CASE e.encryption_state
 WHEN 0 THEN 'No database encryption key present, no encryption'
 WHEN 1 THEN 'Unencrypted'
 WHEN 2 THEN 'Encryption in progress'
 WHEN 3 THEN 'Encrypted'
 WHEN 4 THEN 'Key change in progress'
 WHEN 5 THEN 'Decryption in progress'
END AS encryption_state_desc, c.name, e.percent_complete
FROM sys.dm_database_encryption_keys AS e
LEFT JOIN master.sys.asymmetric_keys AS c
ON e.encryptor_thumbprint = c.thumbprint

SafeNet Authentication Service video demos

Below are a couple of video demos which I made for my work as a Sales Engineer at Gemalto.


  • Demonstration of the SAS integration with the Microsoft Remote Desktop Gateway and Remote Desktop WebAccess. The most interesting part is when we show how the 2nd factor authentication can be bypassed by clicking directly on a cached RDP file.


  • Demonstration of the SAS integration with Netscaler using SAML, where SAS is the IdP and Netscaler is the SP.


  • Demonstration of the integration of SAS with Salesforce.com. The first video shows the configuration, the 2nd video shows the authentication user experience.




  • Demonstration of the integration of SAS with Twilio Programmable Voice APIs. The authenticating user receives a call and Twilio plays the 6-digit code the user needs to authenticate.


  • Demonstration of SAS integration with Linux PAM.