Concerning mathematical placeholders versus reality, I think the history probably runs through continuum mechanics, which includes fluid dynamics but also the theory of deformable solids. It was and still is taught in terms of "infinitesimal pieces of matter" that aren't supposed to be real. One simply thinks of continuous stuff as a dense pack of tiny cubes, whose size is fixed only by the rule "if they're not small enough, make them smaller". We still teach that way today, with the added reverse condition that the tiny cubes must nonetheless be big enough to contain very many molecules: if they're not big enough, make them bigger. Molecules are actually so small that there is plenty of room for a wide range of cube sizes between small enough and big enough; anywhere from nanometres to millimetres generally does it. The little cubes are entirely fictitious, and are explicitly not supposed to represent any real structure in matter. It's a constraint on the validity of the theory that you must be able to make the cubes bigger or smaller without changing any results. They are only a way of thinking about matter.
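To put rough numbers on that window (the arithmetic here is mine, not from any continuum-mechanics text, using liquid water as the working substance), a minimal sketch:

```python
# A rough check of how many molecules sit inside a continuum "cube"
# of a given side length, taking liquid water as the substance.
# All numbers are illustrative back-of-envelope values.

AVOGADRO = 6.022e23        # molecules per mole
DENSITY = 1000.0           # kg/m^3, liquid water
MOLAR_MASS = 0.018         # kg/mol, water

# Number density of molecules, ~3.3e28 per cubic metre.
n = DENSITY / MOLAR_MASS * AVOGADRO

for side, label in [(1e-9, "1 nm"), (5e-9, "5 nm"),
                    (1e-6, "1 um"), (1e-3, "1 mm")]:
    print(f"{label:>5} cube: ~{n * side**3:.1e} molecules")
```

A cube 5 nm on a side already holds thousands of water molecules, a micrometre cube ~3×10^10, and a millimetre cube ~3×10^19, so the window between "small enough" and "big enough" spans roughly six orders of magnitude in side length.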
Concerning Hossenfelder's views about math versus reality: she's a high-energy theorist, so she knows that the theory has been through a series of mathematical incarnations. In physics education, at least, ontogeny usually recapitulates phylogeny. We learn theories in historical sequence, not out of any real devotion to history, but because an appropriately sanitised history provides a gentle uphill slope through the concepts. This teaching tradition distorts history to serve current theory and also distorts current theory by framing it with past misconceptions, but it's a lot easier to take over that ready-made syllabus than to think out a new teaching strategy for oneself.
Hossenfelder's point about math being mistaken for reality in the thin-air upper reaches of high-energy theory is arguable but not trivial. She's written a controversial book about it, though I haven't read it. I have my own views, having come out of high energy a couple of decades ago. There are things to be said for mathematical elegance as a guide to truth, but the principle has been pushed too far for too long. Dirac misled people with his dictum that it is more important to have beauty in your equations than to have them fit experiment. He could say that because of the unbelievable way he cooked up the Dirac equation and predicted the positron, but he was incredibly lucky. It doesn't usually work that well.
My concern about the methodology of high-energy theory in the past half-century is not so much that math alone is unreliable as that it is unimaginative. How can it be unimaginative, when it is only limited by the human imagination? That's exactly how. People aren't creative enough to make big discoveries by mathematical speculation. We need hints from observation and experiment. Their absence is why high-energy theory has been boring as well as barren for so long. The impression one gets from popular media that high-energy theory is constantly effervescing with radical new ideas is completely bogus. It's only effervescing with the desire to project that image. All of its radical new ideas have been kicking around for decades.
Concerning Hossenfelder on Popper, are you perhaps misreading her "testable" as "verifiable"? For whatever reason, Popper's idea that testing is about falsifying rather than verifying is pretty well known in physics. "Falsify" sounds weird, so people don't often say it, but it's definitely not right to imagine that when people talk about testing theories they are assuming that testing verifies or justifies them.
Hossenfelder wrote:
[Popper] said if it isn't testable, it isn't science. Not 'if it's testable, then it's science.'

Karl Popper wrote:
I shall certainly admit a system as empirical or scientific only if it is capable of being tested by experience.
Popper doesn't say "if" here; he says "only if". Testability is offered as a necessary condition for science, not a sufficient one, and that seems to be identical to Hossenfelder's summary.
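To spell out the logic (the notation is mine, not hers): write $S$ for "is science" and $T$ for "is testable". Hossenfelder's sentence and Popper's are then the same implication, each the contrapositive of the other:

$$\lnot T \Rightarrow \lnot S \quad\Longleftrightarrow\quad S \Rightarrow T \quad (\text{``}S\text{ only if }T\text{''}).$$

What she is warning against is the converse, $T \Rightarrow S$, which neither she nor Popper asserts.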
In general I think one can interpret scientific reasoning too narrowly. It seems to me that it can quite easily be both Popperian and inductive at once, in different ways. A paper can, for instance, inductively determine the size that molecules must have, if they are real. Inferring the common molecular size, as a theoretical parameter, from all the diverse data is the same inductive process whether or not one is reifying molecules. Finding that molecular sizes had to be wildly different to explain different experiments would have been a Popperian falsification of molecular reality. Finding that similar molecular sizes suffice to explain many experiments doesn't have to confirm or verify that molecules are really there. If one notes that divergent sizes would have disproven real molecules, but admits that convergent sizes don't prove them, then one has written a Popperian paper about molecular reality that is inductive about the theoretical parameter of molecular size. I submit that this is a typical situation for a scientific paper.
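As an illustration of that pattern (all numbers below are hypothetical, not data from any real paper), here is a minimal sketch of how several unrelated experiments can inductively pin down one theoretical parameter while simultaneously exposing it to Popperian falsification:

```python
# A toy version of the inference pattern above: several unrelated
# experiments each estimate the same theoretical parameter -- say a
# molecular diameter -- with its own uncertainty.  The pooled value is
# the inductive product; gross disagreement between methods would have
# been the Popperian falsification.

# (method, estimate in nm, standard error in nm) -- illustrative only.
estimates = [
    ("gas viscosity",   0.31, 0.03),
    ("diffusion",       0.29, 0.04),
    ("brownian motion", 0.33, 0.02),
    ("surface films",   0.30, 0.03),
]

# Inverse-variance weighted mean: the single size inferred from diverse data.
weights = [1.0 / err**2 for _, _, err in estimates]
mean = sum(w * x for w, (_, x, _) in zip(weights, estimates)) / sum(weights)

# Chi-squared consistency check: chi^2/dof >> 1 would say that no single
# size explains all the experiments, i.e. the hypothesis fails the test.
chi2 = sum(((x - mean) / err) ** 2 for _, x, err in estimates)
dof = len(estimates) - 1

print(f"pooled molecular size: {mean:.3f} nm")
print(f"chi^2/dof = {chi2:.2f}/{dof}")
```

Convergence (chi^2/dof of order one) doesn't prove molecules exist, but divergence would have disproven them: exactly the asymmetry described above.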