
Despite having AI in the company name, I’m somewhat critical of the benefits provided by generative AI/large language models (LLMs). I find that the answers often leave something to be desired, and the general consensus now seems to be that you need RAG (retrieval-augmented generation) to get reliable, correct answers to knowledge questions, or reasoning models for questions whose answers must be derived from existing knowledge.
Claims about the benefits of LLMs for coding differ wildly. Some claim a 30–40% gain in efficiency, others even proclaim the death of the programmer as a profession, whereas a METR study concluded that AI assistance actually slows coders down by 20%. My own results have been mixed – I’ve gotten code that was almost directly usable, but also results that clearly suffered from a lack of comparable training data and were therefore unusable.
The goal
I wanted a simple Python script that would copy video files from my surveillance camera to an off-site FTP server every time a new clip was recorded, without delay, so traditional backup tools like rsync were not the solution. The prompt I used was:
can you write me a python script that monitors a directory and subdirectories and uploads new files to a ftp server
I used this prompt with the free version of ChatGPT, with Google Gemini, and with GitHub Copilot from within Visual Studio Code (using the ChatGPT 5 Mini model).
The results
The full code for all three resulting scripts can be found at the bottom.
- ChatGPT generated by far the shortest code, 70 lines, with almost no comments.
- Gemini generated about twice that (153 lines), but with extensive comments and logging functionality.
- The Copilot result was the longest at 240 lines, because it uses a queue for uploads, checks whether the FTP connection is up (reconnecting if necessary) before an upload, and checks whether a new file is still being written.
The last check in particular was missing from the ChatGPT and Gemini results. Both simply paused for 0.5 s after a new file was detected before uploading. This might work in many cases, but if clips are streamed directly to disk instead of being buffered in RAM first, it will cause errors. So I went with the Copilot result.
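For reference, the stability check in the Copilot script amounts to polling the file size until it stops changing between checks. A simplified sketch of that logic, condensed from the full listing at the bottom:

import os
import time

def wait_until_stable(path, interval=1.0, retries=3):
    """Return True once the file size has stopped changing."""
    try:
        last_size = os.path.getsize(path)
    except OSError:
        return False  # file vanished or is inaccessible
    for _ in range(retries):
        time.sleep(interval)
        try:
            size = os.path.getsize(path)
        except OSError:
            return False
        if size == last_size:
            return True  # size unchanged across one interval: assume writing finished
        last_size = size
    return False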
There are other differences between the three services. What I like about ChatGPT is that it proposes improvements or extended functionality that often make sense. Gemini gives a short description of what the components in the script do and a reminder to change the login credentials before actually using the script. With Copilot, I find it more difficult to interactively steer the code toward the desired result.
All three services were smart enough to generate the required remote directory structure, and all three reminded me that I needed to install the watchdog package.
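For the directory structure, all three scripts converge on essentially the same ftplib pattern: walk the remote path one component at a time and create whatever directory cwd fails on. Condensed from the listings at the bottom:

from ftplib import FTP, error_perm

def ensure_remote_dirs(ftp: FTP, remote_path: str):
    # Walk the path piece by piece; only create a directory if cwd fails.
    for part in remote_path.split('/'):
        if not part:
            continue
        try:
            ftp.cwd(part)
        except error_perm:
            ftp.mkd(part)
            ftp.cwd(part)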
Getting it to work
I added my credentials to the Copilot script and tried to run it. The first attempt failed with the message that unencrypted FTP was not allowed. ChatGPT quickly told me what was needed to get FTP over TLS working, though Copilot could have done it as well (and changed the code directly inside Visual Studio Code).
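For anyone running into the same error: the standard library already covers this with ftplib.FTP_TLS. A minimal sketch of what the switch to FTP over TLS looks like (the exact edit ChatGPT suggested is not reproduced here, and host, credentials, and paths are placeholders):

from ftplib import FTP_TLS

ftps = FTP_TLS()
ftps.connect("ftp.example.com", 21, timeout=30)
ftps.login("your_username", "your_password")
ftps.prot_p()  # encrypt the data channel too, not just the control channel
ftps.cwd("/remote/path")
with open("clip.mp4", "rb") as f:
    ftps.storbinary("STOR clip.mp4", f)
ftps.quit()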
After that, uploads worked, but each new file caused the directory structure to be created not from the root directory, but from the directory the previous file had been uploaded to. As it turned out, the script stripped the / from the root path to avoid double slashes, which meant the cwd command no longer switched to the root directory. With that rectified, the script worked fine.
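To illustrate the bug (a hypothetical simplification, not the literal script code): stripping both slashes turns an absolute remote path into a relative one, so cwd resolves it against whatever directory the connection happens to be in:

from ftplib import FTP

ftp = FTP("ftp.example.com")       # placeholder host
ftp.login("username", "password")  # placeholder credentials

remote_base = "/camera/uploads"    # hypothetical remote root

# Buggy: '/camera/uploads' becomes 'camera/uploads', a *relative* cwd
# that starts from wherever the previous upload left the connection:
#   ftp.cwd(remote_base.strip('/'))

# Fixed: trim only a trailing slash so the path stays absolute:
ftp.cwd(remote_base.rstrip('/'))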
Conclusions
With this relatively simple task, it was quite easy to get a working result. Typical boilerplate code like generating the required directory structure would definitely have taken longer to write manually. On the other hand, it did not surprise me that a bit of fine-tuning was needed to actually get the script to work. It’s interesting that three different services (with Copilot actually using a ChatGPT model) produce three different results, with the Copilot script clearly being the most complete.
So, is AI help useful for coding? In this case, clearly yes – but you still need to understand the code well enough to make the changes that get it to do what it is supposed to do. Also, this was a simple problem solvable in about 200 lines of code; don’t expect similar results when asking for a larger application or solutions to more complex coding problems. I think it’s still best to have a human take the architect role and define the structure of the program, giving the LLM only small, isolated code-generation tasks. Testing also becomes even more important, to make sure the code actually does what it is supposed to do.
Code
ChatGPT:
import os
import time
from ftplib import FTP
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

# === CONFIGURATION ===
FTP_HOST = "ftp.example.com"
FTP_USER = "your_username"
FTP_PASS = "your_password"
FTP_BASE_DIR = "/remote/path"              # Remote directory on the FTP server
LOCAL_WATCH_DIR = "/path/to/local/folder"  # Directory to monitor
# ======================

def upload_to_ftp(local_path):
    """Upload a file to the FTP server, preserving relative folder structure."""
    rel_path = os.path.relpath(local_path, LOCAL_WATCH_DIR)
    remote_path = os.path.join(FTP_BASE_DIR, rel_path).replace("\\", "/")

    with FTP(FTP_HOST, FTP_USER, FTP_PASS) as ftp:
        print(f"[+] Connected to FTP: {FTP_HOST}")
        try:
            # Ensure directory structure exists
            dirs = remote_path.rsplit('/', 1)[0]
            for part in dirs.split('/'):
                if part:
                    try:
                        ftp.mkd(part)
                    except Exception:
                        pass  # Ignore if it already exists
                    ftp.cwd(part)

            # Upload file
            with open(local_path, "rb") as f:
                ftp.storbinary(f"STOR {os.path.basename(local_path)}", f)
            print(f"[↑] Uploaded: {rel_path}")
        except Exception as e:
            print(f"[!] FTP upload failed for {local_path}: {e}")

class NewFileHandler(FileSystemEventHandler):
    """Handles new file creation events."""
    def on_created(self, event):
        if not event.is_directory:
            time.sleep(0.5)  # small delay to ensure file is ready
            print(f"[📁] New file detected: {event.src_path}")
            upload_to_ftp(event.src_path)

def main():
    print(f"👀 Monitoring directory: {LOCAL_WATCH_DIR}")
    event_handler = NewFileHandler()
    observer = Observer()
    observer.schedule(event_handler, LOCAL_WATCH_DIR, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
        print("\n🛑 Stopped watching.")
    observer.join()

if __name__ == "__main__":
    main()
Gemini:
import time
import os
import ftplib
import logging
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

# --- Configuration ---
# Set up logging for events and errors
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s - %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S')

# The local directory to monitor recursively
MONITORED_PATH = "/path/to/your/monitored/directory"  # <<< CHANGE THIS

# FTP Server details
FTP_HOST = "your.ftp.server.com"   # <<< CHANGE THIS
FTP_USER = "your_ftp_username"     # <<< CHANGE THIS
FTP_PASS = "your_ftp_password"     # <<< CHANGE THIS
FTP_ROOT_DIR = "/remote/uploads/"  # The base remote directory (must exist)
# ---------------------

class FTPUploadHandler(FileSystemEventHandler):
    """
    Handles file system events (specifically file creation) and uploads
    new files to the FTP server.
    """

    def __init__(self, local_root, ftp_host, ftp_user, ftp_pass, ftp_root):
        super().__init__()
        self.local_root = os.path.abspath(local_root)
        self.ftp_host = ftp_host
        self.ftp_user = ftp_user
        self.ftp_pass = ftp_pass
        self.ftp_root = ftp_root

    def _upload_file(self, src_path):
        """Attempts to connect, upload the file, and close the connection."""
        if not os.path.isfile(src_path):
            return

        ftp = None
        try:
            # 1. Establish FTP Connection
            logging.info(f"Connecting to FTP: {self.ftp_host}")
            ftp = ftplib.FTP(self.ftp_host, timeout=30)
            ftp.login(self.ftp_user, self.ftp_pass)
            logging.info("FTP connection successful.")

            # 2. Determine Relative Path
            # Get the path relative to the monitored root (e.g., 'sub1/new_file.txt')
            relative_path = os.path.relpath(src_path, self.local_root)

            # 3. Determine Remote Path and ensure directories exist
            remote_filepath = os.path.join(self.ftp_root, relative_path).replace("\\", "/")
            remote_dir = os.path.dirname(remote_filepath)

            # Ensure the remote directory structure exists
            self._create_remote_directories(ftp, remote_dir)

            # 4. Upload the file
            logging.info(f"Uploading {src_path} to {remote_filepath}...")
            with open(src_path, 'rb') as fp:
                # Use storbinary for binary files (good practice for all files)
                ftp.storbinary(f'STOR {remote_filepath}', fp)

            logging.info(f"Successfully uploaded: {remote_filepath}")

        except ftplib.all_errors as e:
            logging.error(f"FTP Error during upload of {src_path}: {e}")
        except Exception as e:
            logging.error(f"An unexpected error occurred: {e}")
        finally:
            if ftp:
                try:
                    ftp.quit()
                except Exception:
                    pass  # Ignore errors on quit

    def _create_remote_directories(self, ftp, remote_path):
        """Recursively creates remote directories if they do not exist."""
        # Change to FTP_ROOT_DIR first
        ftp.cwd(self.ftp_root)

        # Split the remote path relative to the FTP_ROOT_DIR
        # We replace the backslash for compatibility with FTP paths
        path_components = remote_path.replace(self.ftp_root, "").strip("/").split('/')

        for part in path_components:
            if not part:
                continue
            try:
                # Try to change directory (if it exists)
                ftp.cwd(part)
            except ftplib.error_perm:
                # Directory does not exist, so create it
                ftp.mkd(part)
                ftp.cwd(part)

        # After creation, navigate back to the root if needed, but it's simpler
        # to just leave it pointing to the target directory for the STOR command.
        # However, for robustness, we return to the root to ensure consistent state.
        ftp.cwd(self.ftp_root)

    def on_created(self, event):
        """Called when a file or directory is created."""
        if not event.is_directory:
            logging.info(f"New file detected: {event.src_path}")
            # Give the OS a moment to finish writing the file to prevent permission errors
            time.sleep(0.5)
            self._upload_file(event.src_path)
        # We ignore directory creation events, as we only care about files.

def run_monitor():
    """Sets up the observer and starts the file system monitoring loop."""
    try:
        path = MONITORED_PATH
        event_handler = FTPUploadHandler(
            local_root=path,
            ftp_host=FTP_HOST,
            ftp_user=FTP_USER,
            ftp_pass=FTP_PASS,
            ftp_root=FTP_ROOT_DIR
        )

        # Initialize the Observer
        observer = Observer()
        # Schedule the handler to monitor the path, set recursive=True to check subdirectories
        observer.schedule(event_handler, path, recursive=True)
        observer.start()

        logging.info(f"Directory monitor started on: {path}")
        logging.info("Press Ctrl+C to stop.")

        # Keep the thread alive
        while True:
            time.sleep(1)

    except FileNotFoundError:
        logging.error(f"The monitored path does not exist: {MONITORED_PATH}")
    except KeyboardInterrupt:
        logging.info("Monitor stopped by user.")
    except Exception as e:
        logging.critical(f"A critical error occurred: {e}")
    finally:
        if 'observer' in locals() and observer.is_alive():
            observer.stop()
            observer.join()

if __name__ == "__main__":
    run_monitor()
Copilot:
#!/usr/bin/env python3
"""
Directory monitor that uploads new/changed files (including in subdirs) to an FTP server.
Requires: pip install watchdog
"""

import os
import time
import threading
import queue
from pathlib import Path
from ftplib import FTP, error_perm
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

# ========== CONFIG ==========
LOCAL_WATCH_DIR = "/path/to/watch"    # local directory to monitor
FTP_HOST = "ftp.example.com"
FTP_PORT = 21
FTP_USER = "username"
FTP_PASS = "password"
REMOTE_BASE_DIR = "/remote/base/dir"  # remote base directory where files will be placed
# ============================

CHECK_STABLE_INTERVAL = 1.0  # seconds between size checks to ensure file is finished writing
CHECK_STABLE_RETRIES = 3

class FTPUploader:
    def __init__(self, host, port, user, passwd, remote_base):
        self.host = host
        self.port = port
        self.user = user
        self.passwd = passwd
        self.remote_base = remote_base.rstrip('/')
        self.lock = threading.Lock()
        self.ftp = None
        self._connect()

    def _connect(self):
        with self.lock:
            try:
                if self.ftp:
                    try:
                        self.ftp.quit()
                    except Exception:
                        pass
                self.ftp = FTP()
                self.ftp.connect(self.host, self.port, timeout=30)
                self.ftp.login(self.user, self.passwd)
                # go to remote base dir (create if needed)
                self._ensure_remote_dirs(self.remote_base)
                self.ftp.cwd(self.remote_base)
                print("Connected to FTP:", self.host)
            except Exception as e:
                self.ftp = None
                print("FTP connect failed:", e)

    def _ensure_remote_dirs(self, remote_path):
        # Create nested directories on the FTP server if they don't exist
        if not remote_path or remote_path == "/":
            return
        parts = [p for p in remote_path.split('/') if p]
        try:
            # start from root
            self.ftp.cwd("/")
        except Exception:
            pass
        for part in parts:
            try:
                self.ftp.cwd(part)
            except error_perm:
                try:
                    self.ftp.mkd(part)
                    self.ftp.cwd(part)
                except Exception as e:
                    raise
        # leave cwd to caller

    def upload(self, local_path, remote_rel_path):
        """
        Upload a single file. remote_rel_path is the path relative to remote_base
        (use forward slashes).
        """
        if not os.path.isfile(local_path):
            print("Skipping (not a file):", local_path)
            return False
        # ensure connected
        if not self.ftp:
            print("FTP not connected, attempting reconnect...")
            self._connect()
        if not self.ftp:
            print("Failed to connect to FTP; skipping upload:", local_path)
            return False
        with self.lock:
            try:
                # ensure directories exist on server
                remote_dirs = os.path.dirname(remote_rel_path).replace('\\', '/').strip('/')
                if remote_dirs:
                    # navigate to base, then create nested dirs
                    self.ftp.cwd(self.remote_base)
                    for part in remote_dirs.split('/'):
                        try:
                            self.ftp.cwd(part)
                        except error_perm:
                            try:
                                self.ftp.mkd(part)
                                self.ftp.cwd(part)
                            except Exception as e:
                                print("Failed to create remote dir:", part, e)
                                raise
                # upload file
                remote_name = os.path.basename(remote_rel_path)
                with open(local_path, 'rb') as f:
                    print(f"Uploading {local_path} -> {self.remote_base}/{remote_rel_path}")
                    self.ftp.storbinary(f"STOR {remote_name}", f)
                # return to remote base
                self.ftp.cwd(self.remote_base)
                return True
            except Exception as e:
                print("Upload failed:", e)
                # attempt reconnect for next time
                try:
                    self.ftp.close()
                except Exception:
                    pass
                self.ftp = None
                return False

class UploadWorker(threading.Thread):
    def __init__(self, uploader, local_base, q):
        super().__init__(daemon=True)
        self.uploader = uploader
        self.local_base = os.path.abspath(local_base)
        self.queue = q
        self.pending = set()
        self.pending_lock = threading.Lock()
        self._stop = threading.Event()

    def enqueue(self, path):
        path = os.path.abspath(path)
        with self.pending_lock:
            if path in self.pending:
                return
            self.pending.add(path)
        self.queue.put(path)

    def run(self):
        while not self._stop.is_set():
            try:
                path = self.queue.get(timeout=1)
            except queue.Empty:
                continue
            if path is None:
                break
            try:
                # wait until file size stabilizes
                if not self._wait_for_stable(path):
                    print("File not stable or inaccessible, skipping:", path)
                    continue
                rel_path = os.path.relpath(path, self.local_base).replace(os.sep, '/')
                self.uploader.upload(path, rel_path)
            finally:
                with self.pending_lock:
                    self.pending.discard(path)
                self.queue.task_done()

    def _wait_for_stable(self, path):
        # ensure file exists and size doesn't change across retries
        try:
            last_size = os.path.getsize(path)
        except Exception:
            return False
        for _ in range(CHECK_STABLE_RETRIES):
            time.sleep(CHECK_STABLE_INTERVAL)
            try:
                size = os.path.getsize(path)
            except Exception:
                return False
            if size == last_size:
                return True
            last_size = size
        # last check
        try:
            return os.path.getsize(path) == last_size
        except Exception:
            return False

    def stop(self):
        self._stop.set()
        self.queue.put(None)

class NewFileHandler(FileSystemEventHandler):
    def __init__(self, worker):
        super().__init__()
        self.worker = worker

    def on_created(self, event):
        if event.is_directory:
            return
        self.worker.enqueue(event.src_path)

    def on_modified(self, event):
        if event.is_directory:
            return
        self.worker.enqueue(event.src_path)

def main():
    local_dir = LOCAL_WATCH_DIR
    if not os.path.isdir(local_dir):
        print("Local watch directory does not exist:", local_dir)
        return

    q = queue.Queue()
    uploader = FTPUploader(FTP_HOST, FTP_PORT, FTP_USER, FTP_PASS, REMOTE_BASE_DIR)
    worker = UploadWorker(uploader, local_dir, q)
    worker.start()

    event_handler = NewFileHandler(worker)
    observer = Observer()
    observer.schedule(event_handler, local_dir, recursive=True)
    observer.start()
    print("Watching", local_dir, "and uploading to", FTP_HOST)
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        print("Stopping...")
    finally:
        observer.stop()
        observer.join()
        worker.stop()
        worker.join()

if __name__ == "__main__":
    main()