A lot of people are still doing that and haven’t moved on from it
(Please at least use SFTP!)
I remember joining the industry and switching our company over to full Continuous Integration and Deployment. Instead of uploading DLLs directly to prod via FTP, we could verify each build, deploy to each environment, and run some service tests to check that pages were loading, all the way up to prod - with rollback. I showed my manager, and he shrugged. He didn’t see the benefit when, in his eyes, all he needed to do was drag and drop, then load the page to make sure all was fine.
Unsurprisingly, I found out that this is how he builds websites to this day…
True story, bruh.
This is from before my time, but… Deploying an app by uploading a pre-built bundle? If it’s a fully self-contained package, that seems good to me, perhaps better than many websites today…
That’s one nice thing about Java. You can bundle the entire app in one .jar or .war file (a .war is essentially the same as a .jar but it’s designed to run within a Servlet container like Tomcat).
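For illustration, a minimal sketch of what that deployment looks like in practice (the file names and Tomcat path are hypothetical):

java -jar app.jar                      # executable .jar: the whole app in one file
cp mysite.war /opt/tomcat/webapps/     # .war: Tomcat picks it up and deploys it automatically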
PHP also became popular in the PHP 4.x era because it had a large standard library (you could easily create a PHP site with no third-party libraries), and deployment was simply copying the files to the server. No build step needed. Classic ASP was popular before it and also had no build step, but it had a very small standard library and relied heavily on COM components, which had to be manually installed on the server.
PHP is mostly the same today, but these days it’s JIT-compiled, so it’s faster than the PHP of the past, which was interpreted.
I mean, there are a lot of Dockerfiles out there with
COPY . .
True, but building the image is not the same as deploying to production.
Fair point
There’s still a few sites I deploy changes to using ssh+rsync. …which is made considerably easier by the fact that it’s just a static website generated with Jekyll.
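For reference, the whole deploy is roughly this (the paths and host are made up):

jekyll build                                          # regenerate the static site into _site/
rsync -avz --delete _site/ user@host:/var/www/site/   # sync the output over ssh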
This app uses Java Swing?
This is how I deployed an app less than 5 years ago (healthcare).
It’s sad
I know a place where they still do this. They’ve got an 8-digit user count and 7-digit monthly profits, all running on one server that costs something like $20 a month. They downsized a few years ago to a single-digit employee count and just sit there and collect profits. And this is why I’m now working for a company that casually dropped a few grand on a glorified CPU usage meter, and a few grand on top of that for a deployment tool that does the same thing the old guy at my former place was doing with his trusty FTP client.
This is how I deploy my personal website today. The host doesn’t give SSH access.
FTP Explorer all the way! Preferred it to FileZilla… I mean, it didn’t support SFTP, but I liked it.
You will pry ftp from my cold dead hands.
Can you use sftp instead? Pwease? 🥺
Who can afford the performance hit?
My production server running Win XP Home has to have the firewall off just to make all the super secret company internal networks work. SFTP would cripple us!
/s - except the part about the performance hit, which is a stupidly unacceptable excuse in 2024.
Somehow I miss those days. Now you need weeks of training to understand the black magic behind all the build/deployment stuff in whatever cloud provider your company decided to use…
We’ve got our own platform based on Kubernetes and CNCF stuff, and we don’t have to care anymore about the metal underneath. AWS? OTC? Azure? That’s just a target parameter; the platform does the rest. It’s great.
How often do you switch cloud providers that this is even a real rather than a hypothetical benefit? (Compared to the cost of dealing with a much more complicated stack.)
I manage a stack like this. We have dedicated hardware running a steady state of backend processing, but we scale into AWS if there’s a surge in realtime processing and we don’t have the hardware. We also had an outage in our on-prem datacenter once, which was expensive for us (I assume an insurance claim was made), but scaling to AWS was almost automatic, and the impact was minimal for a full datacenter outage.
If we wanted to optimize even more, I’m sure we could scale into Azure depending on server costs when spot pricing is higher in AWS. The moral of the story is to not get too locked into any one provider, and to use some of the abstraction layers so that AWS, Azure, etc. are just targets you can shop around for by default, without having to scramble.
It’s not about switching, it’s about hosting our services on different platforms at the same time.
Naa, once you figure out one, the rest usually click.
FTP and rsync my beloved
okay, but why did you use a password when the ssh/sftp key is right next to the files
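For anyone curious, a minimal sketch of switching to key-based auth instead (the key path, host, and directories are hypothetical):

ssh-keygen -t ed25519 -f ~/.ssh/deploy_key         # generate a key pair
ssh-copy-id -i ~/.ssh/deploy_key.pub user@host     # install the public key on the server
rsync -e "ssh -i ~/.ssh/deploy_key" -avz ./site/ user@host:/var/www/site/   # no password prompt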
rsync gang when?
The year Linux takes over the desktop!
I feel like the reason nobody uses FileZilla etc. anymore is that everybody who wanted it has already migrated to Linux. So seriously, it already happened.
People don’t use FileZilla for server management anymore? I feel like I’ve missed that memo.
I suppose in the days of ‘Cloud Hosting’ a lot of people (hopefully) don’t just randomly upload new files (manually) to a server anymore.
Even if you still use normal servers that behave like this, a better practice would be to have a build server that creates builds - whenever you check code into the main branch, it creates a deploy for the server, and you deploy it from there - instead of compiling locally, opening FileZilla and doing an upload.
If you’re using ‘Cloud Hosting’ - for example AWS - and you use VMs or bare metal, you’d maybe create Elastic Beanstalk images and upload a new Application or Machine Image as a new version, and deploy that in a more managed way. Or if you’re using Docker, you just upload a new Docker image into a Docker registry and deploy those, as sketched below.
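A minimal sketch of that Docker-registry flow (the registry URL and image name are hypothetical):

docker build -t registry.example.com/myapp:1.2.3 .   # build the image from the Dockerfile
docker push registry.example.com/myapp:1.2.3         # publish it to the registry
# ...then point the deployment (ECS service, k8s manifest, etc.) at the new tag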
For some of my sites, I still build on my PC and rsync the build directory across. I’ve been meaning to set up Gitlab or something similar and configure automated deployments.
Yea, I wasn’t saying it’s always bad in every scenario - but we used to have this kind of deployment in a professional company. It’s pretty bad if you’re still doing it like this in an enterprise scenario.
But for a personal project, it’s alrightish. But yea, there are easier setups - for example, configuring an automated deployment from GitHub/GitLab. You can check out other people’s deployment configs, since all that stuff is part of the repo, in the
.github
folder. So probably all you have to do is find a project that’s similar to yours, like “static file upload to an SFTP server”, and copypaste the script into your own repo. (for example: a script that publishes a website to GitHub Pages)
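As a concrete example, here’s a minimal sketch of a GitHub Actions workflow that publishes a static site to GitHub Pages (the ./public path is an assumption - adjust it to wherever your build output lands):

# .github/workflows/deploy-pages.yml - hedged sketch, not a drop-in config
name: deploy-pages
on:
  push:
    branches: [main]
permissions:
  pages: write
  id-token: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/checkout@v4
      - uses: actions/upload-pages-artifact@v3
        with:
          path: ./public          # assumed build output directory
      - uses: actions/deploy-pages@v4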
This is what I do because my sites aren’t complicated enough to warrant a build system. Personally, I think most websites out there are over-engineered. Example: a Discord friend made a React site that displays stats from a gaming server. It looks nice, but you literally can’t hyperlink to any of the data - it can only be loaded dynamically, and it only looks coherent on a phone in portrait mode. A lot of people are following trends (some good trends) without really thinking about why.
I’m starting to like the htmx model a lot. Server-rendered app that uses HTML attributes to configure the dynamic bits (e.g. which URL to hit and which DOM element to insert the response into). Don’t have to write much JS (or any in some cases).
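For instance, a minimal htmx sketch (the /stats route and element IDs are hypothetical):

<script src="https://unpkg.com/htmx.org@1.9.12"></script>
<!-- GET /stats and swap the returned HTML fragment into #stats-panel -->
<button hx-get="/stats" hx-target="#stats-panel" hx-swap="innerHTML">Refresh stats</button>
<div id="stats-panel"></div>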
you literally can’t hyperlink to any of the data
I thought most React-powered frameworks use a URL router out of the box these days? The developer does need to have a rough idea of what they’re doing, though.
They have bundled malware with the main downloads on their own site multiple times over the years, and even denied it and tried gaslighting people, claiming AVs were giving false positives because AV companies are paid off by other corporations. And the admin will even try to delete the threads about this stuff, but the Web Archive to the rescue…
You know what? I didn’t believe you, since I’ve been using it for a long time on Linux and never had any issues with it. Today, when I helped a friend (on Windows) with an SFTP transfer and recommended FileZilla, was the first time I realised the official Downloads page serves adware. The executable even gets flagged by Microsoft Defender and VirusTotal. That’s actually REALLY bad. Isn’t FileZilla operated by Mozilla? Should I stop using it, even though the Linux versions don’t have the sketchy stuff? It definitely leaves a really bad taste.
Yeah, it’s bad. Surprised they’re still serving that crap in their own bundle, but I guess some things don’t change.
FileZilla has no relation to Mozilla. But yeah, I moved away from it years ago. The general recommendation I’ve seen is “anything but FileZilla”. Personally I use WinSCP on Windows, and I’ll have to figure out what to use when I switch my daily driver to Linux.