in reply to pistolero

@p
"Postgres changes the binary format between major releases."

so what? every developer has to deal with this..write migration scripts and ship it with your upgrades..this is NOT a user problem.

"You're supposed to be conservative about migrating."

yeah..people stop fucking upgrading and run decades-old software..really great strategy.

in reply to mk

@mk

> the exception doesn't make the rule.

It is *designed* for that use. If you're using it and you don't like how it's designed to be used, you probably shouldn't use it.

> i don't care about your enterprise problems. go fuck off and buy an oracle subscription.

If you want to wade in the kiddie pool, here: freedos.org/

in reply to mk

@mk

> so what? every developer has to deal with this..write migration scripts and ship it with your upgrades..this is NOT a user problem.

No, no, the on-disk format of the data. You don't do a "migration script" for a filesystem; you release the new version of the filesystem and then everyone copies their shit over.

> yeah..people stop fucking upgrading and run decades-old software..really great strategy.

Previous versions are still supported and get updates. This isn't a browser. What you are saying is bad. You don't auto-migrate a 10TB database.

Also decades-old software is bad exactly why?

in reply to pistolero

@p
"You don't do a "migration script" for a filesystem, you release the new version of the filesystem"

here's how i upgrade my filesystem:

```
$ zpool upgrade hetznerBackupPool
This system supports ZFS pool feature flags.

Enabled the following features on 'hetznerBackupPool':
zilsaxattr
head_errlog
blake3
block_cloning
vdev_zaps_v2
```

in reply to mk

You're not on a different version of ZFS. The version of ZFS hasn't changed in forever. You're adding new feature flags. Some of those flags will break compatibility when you upgrade the pool. That's why it's a manual process.
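
You can see the distinction straight from the tooling; a quick look (pool name reused from your output, feature states illustrative, output abridged):

```
# Features are per-pool properties, not a version number.
$ zpool get all hetznerBackupPool | grep feature@
hetznerBackupPool  feature@blake3         enabled  local
hetznerBackupPool  feature@block_cloning  enabled  local
```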

High availability is the default expectation for a database; your local dev box is the exception. When you have large Postgres servers (multiple TB), you need logical replication to your new replica set, then promotion and failover. Upgrading Postgres's core data model is nothing like flipping a filesystem feature flag.
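
That dance, sketched with psql (hostnames, database, and publication names are made up):

```
# On the old primary (wal_level must be 'logical'): publish everything.
$ psql -h old-primary -d appdb \
    -c "CREATE PUBLICATION upgrade_pub FOR ALL TABLES;"

# On the new-version server: subscribe, let it catch up, then cut over.
$ psql -h new-primary -d appdb \
    -c "CREATE SUBSCRIPTION upgrade_sub
        CONNECTION 'host=old-primary dbname=appdb user=repl'
        PUBLICATION upgrade_pub;"
```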

It's not some "enterprise" feature. I've done it on clusters as small as 4 nodes. It's what happens behind the scenes on those fancy "cloud" instances.

in reply to pistolero

"Your time doesn't matter."

this is what you're telling postgres users? the majority of them small and medium-sized companies..

---

Company Size Distribution

Small companies (<50 employees): 43%
Medium-sized companies (50–1000 employees): 42%
Large companies (>1000 employees): 15%

enlyft.com/tech/products/postg…

no private usage counted.

---

i bet that the majority of them don't have any of your high-availability enterprise problems and just want it to fucking work without extra support costs.

in reply to mk

@mk

> majority small and medium sized companies..

My entire career has been spent either working at or running that type of company.

> i bet that the majority of them don't have any of your high-availability enterprise problems

Literally every company I have ever worked at cares about this. Any hacker that gives a damn about his work cares about zero downtime. Maybe it doesn't matter in .de if your shit is broken but it sure as hell does here.

in reply to mk

@mk @pistolero @djsumdog I can understand you both. On the one hand, software X can migrate/upgrade its file structure (e.g. the "binary format") by itself, by automatically running some migration/upgrade script. These scripts can be bound to version numbers (marking when the upgrade takes place) so the user=admin isn't bothered by it. The "in-place" migration also forces the upgrade on the user=admin, though.

But it seems like #PostgreSQL follows another philosophy, one that says administrators are the responsible parties and that it should be fully under their control when the migration/upgrade takes place. So they don't want to force the upgrade over the admin's decision-making process, and instead let him decide when it happens.

So what would be the solution? A simple script like pg_upgrade.sh that the administrator can run manually, or, for people like @mk, an installer that asks "Should pg_upgrade.sh be run for you automatically?" and then does it during the upgrade.
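
Such a wrapper would mostly just call the stock pg_upgrade binary that ships with PostgreSQL; a minimal sketch (Debian-style paths, purely illustrative):

```
# Dry-run compatibility check first; drop --check for the real run.
$ pg_upgrade \
    --old-bindir  /usr/lib/postgresql/15/bin \
    --new-bindir  /usr/lib/postgresql/16/bin \
    --old-datadir /var/lib/postgresql/15/main \
    --new-datadir /var/lib/postgresql/16/main \
    --check
```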

in reply to Roland Häder🇩🇪

@roland @mk It's not like the RDB or anything: semantics change, and the APIs that extensions use change. There are plenty of community-developed scripts for simple upgrades of small clusters. FSE (not "enterprise") is 611GB on disk; when I did the upgrade to 16.2, it took a while. These upgrades have to be done deliberately. It's not quite as bad as Python 2 to Python 3, but it's closer to that than to updating some random application.

The closer you get to the base levels of the system, the more careful you have to be, and for most uses of a database (the uses Postgres is designed for), data loss and downtime are unacceptable. Some query that was fast suddenly gets slow? You can't have that.

And how do you even do that across distros? Maybe his distro should provide that script; Debian does, if I recall correctly. But he's shoving everything into Docker containers, and Docker containers tend to offload the heavy storage stuff until they can't.
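
(Debian's postgresql-common wrapper, for reference; versions and cluster name below are illustrative:)

```
# Debian/Ubuntu: postgresql-common wraps pg_upgrade per cluster.
$ pg_lsclusters                    # list installed clusters and their versions
$ sudo pg_upgradecluster 15 main   # migrate cluster "15/main" to the newest installed version
```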
in reply to Roland Häder🇩🇪

@roland @mk @RedTechEngineer Well, Python 3 was a big change. All of my Ruby 1.8(!!) code still works in 3.4, and most PHP stuff is the same, but in Python's case it's got less to do with code; it's a coordination effort: lift the world without moving it.

Jython, for example, is still only compatible with Python 2.7. People that use Jython still have to write 2.7 code: if you want your Python code to talk to Scala, you have to write code that works on Python 2. So you write a library that bridges some code written in Java with some code written in Python, and then someone else's Scala depends on it; you can't just wave a wand.

in reply to Roland Häder🇩🇪

@roland @mk @RedTechEngineer Unlikely; if it was 1:1 then it wouldn't require porting to begin with. You could maybe have a bot attempt it, but the bot would have to decipher what the original author *meant* to do rather than what they actually did.

For example, Python 3 changed how strings were handled. Probably in most cases this is trivial, but sometimes when reading the code, a machine wouldn't be able to tell if it was intended to handle ASCII or UTF-8 or binary data. And that's one semantic change: a lot of libraries went away, so how do you automate that? Just dump the library code into the middle of the file, or select a new API? Can you automate the process of translating a web2py application into a Django one? A lot of these things you can't really automate: you need to make decisions about what to do, which requires you to understand why the code exists.
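
You can see the ambiguity in one line (assuming both interpreters are installed):

```
# The same literal means bytes in Python 2 and text in Python 3.
$ python2 -c 'print "x" == b"x"'    # True  -- str *is* a byte string in 2
$ python3 -c 'print("x" == b"x")'   # False -- str (text) never equals bytes in 3
```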

in reply to pistolero

Yea, I was guessing you were talking about old, simple, one-task scripts. I did have to port some Python 2 to 3 on a few projects. Some of it wasn't that bad, but there were some things that took a while.

I commend the efforts of the BeautifulSoup4 devs who had to deal with the entire SGML parser being removed.

The 90% number was hyperbolic. It does spit out what it can't auto-fix, and I just hoped the process had gotten better now that most libraries/frameworks have been ported over. I haven't had to do any py2->3 porting in years either. 😋
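
For anyone following along, the tool in question is 2to3, which ships with CPython up through 3.12; the usual loop (filename made up):

```
# Preview the mechanical fixes as a diff, then apply them in place.
$ 2to3 old_script.py        # prints the proposed diff, changes nothing
$ 2to3 -w old_script.py     # rewrites the file (keeps old_script.py.bak)
```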

in reply to djsumdog

@roland @RedTechEngineer @mk

> Yea, I was guessing you were talking about old, simple, one-task scripts.

I used to write those in Ruby unless I had a reason; nowadays they're in awk unless I have a reason (e.g., JSON). So I can sit down at a terminal in 1985 and be fine, ha.

> The 90% number was hyperbolic. It does spit out what it can't auto-fix,

Yeah, I think, like...half the time it's easier to just put the old code on the left, an empty buffer on the right, and do a rewrite.
