litchralee
If you were to properly consider the problem, the actual cost would be determined by cost per distance traveled, and you essentially decide the distance by whatever you are budgeted for.
I wrote my comment in response to the question, and IMO, I did it justice by listing the various considerations that would arise, in the order which seemed most logical to me. At no point did I believe I was writing a design manual for how to approach such a project.
There are much smarter people than me with far more sector-specific knowledge to “properly consider the problem”, but if you expected a feasibility study from me, then I’m sorry to disappoint. My answer, quite frankly, barely rises to a back-of-the-envelope level, the sort of answer that I could give if asked the same question in an elevator car.
I never specified that California would be the best place to implement this process.
While the word California didn’t show up in the question, it’s hard to imagine another “state on the coast” with “excess solar” where desalination would be remotely beneficial. 30 US States have coastlines, but the Great Lakes region and the Eastern Seaboard are already humid and wet, with rivers and tributaries that aren’t exactly in a drought condition. That leaves the three West Coast states, but Oregon and Washington are fairly well supplied with water in the PNW. That kinda leaves California, unless we’re talking about Mexican states.
I’m not dissing the concept of desalination. But the literature on existing desalination plants around the world showcases the numerous challenges beyond just the money. Places like Israel and Saudi Arabia have desalination plants out of necessity, but the operational difficulties are substantial: clogging of inlet pipes by sea life is a regular occurrence, disposal of the extracted brine/salt is ecologically tricky, energy costs are high, and more. And then throwing pumped hydro into this project would make it a substantial undertaking, as dams of any significant volume are always serious endeavors.
At this point, I feel the question is approaching pie-in-the-sky levels of applicability, so I’m not sure what else I can say.
I’m not a water or energy expert, but I have occasionally paid attention to the California ISO’s insightful – while perhaps somewhat dry – blog. This is the grid operator that coined the term “duck curve” to describe the abundance of solar energy available on the grid during daylight hours, beyond what is actually being demanded during those hours.
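To make the duck curve idea concrete, here’s a toy sketch (in Rust, with entirely invented numbers, not real CAISO data) of net load, i.e. demand minus solar, which is the quantity that dips hard in the middle of the day:

```rust
// Toy illustration of the "duck curve": net load = demand minus solar output.
// All numbers are invented for illustration and are not real CAISO data.
fn main() {
    let hours = [6, 9, 12, 15, 18, 21];
    let demand_gw = [22.0, 26.0, 28.0, 29.0, 33.0, 30.0]; // hypothetical statewide demand
    let solar_gw = [0.5, 8.0, 14.0, 12.0, 2.0, 0.0]; // hypothetical solar generation

    for i in 0..hours.len() {
        let net = demand_gw[i] - solar_gw[i];
        println!(
            "{:>2}:00  demand {:>4.1} GW  solar {:>4.1} GW  net load {:>4.1} GW",
            hours[i], demand_gw[i], solar_gw[i], net
        );
    }
}
```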
So yes, there is indeed an abundance of solar power during the daytime, for much of the year in California. But the question then moves to: where is this power available?
For reference, the California ISO manages the state-wide grid, but not all of California is tied to that grid. Some regions, like the Sacramento and Los Angeles areas, have their own systems which are tied in, but those interconnections are not sufficient to import all the necessary electricity into those regions; local generation is still required.
To access the bulk of this abundant power would likely require high-voltage transmission lines, which PG&E (the state’s largest generator and transmission operator) operates, alongside lines owned by other entities. By and large, building a new line is a 10+ year endeavor, but plenty of existing lines meet up at strategic locations around the state, especially near major energy markets (SF Bay, LA, San Diego) and major energy consumers (the San Joaquin River Delta pumping station, the pumping station near the Grapevine south of Bakersfield).
But water desalination isn’t just a regular energy consumer. A desalination plant requires access to salt water and to a freshwater river or basin to discharge into. That drastically limits options to coastal locations, or long-distance piping of salt water to the plant.
The latter is difficult because of the corrosion that salt water causes; it would be nearly unsustainable to maintain such a pipe for distances beyond maybe 100 km, and that’s pushing it. The coastal option would require land – which is expensive – and carries its own complications simply from being near the sea. But setting aside the regulatory/zoning issues, we still have another problem: how to pump water upstream.
Necessarily, the sea is where freshwater rivers drain to. So a desalination plant by the ocean would have to send freshwater back upstream. This would increase the energy costs from exorbitant to astronomical, and at that point, we could have found a different use for the excess solar, like storing it in hydrogen or batteries for later consumption.
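For a rough sense of scale on that upstream pumping (back-of-the-envelope only; the lift heights are assumptions and the reverse-osmosis figure is just a commonly cited ballpark, not something from this thread):

```rust
// Back-of-the-envelope: gravitational energy to lift one cubic meter of water,
// E = rho * g * h, expressed in kWh. Pipe friction losses over long runs would
// add on top of this, and seawater reverse osmosis itself is commonly
// ballparked at roughly 3-4 kWh per cubic meter (assumed figure).
fn lift_energy_kwh_per_m3(lift_m: f64) -> f64 {
    const RHO_KG_PER_M3: f64 = 1000.0; // density of fresh water
    const G: f64 = 9.81; // gravitational acceleration, m/s^2
    const J_PER_KWH: f64 = 3.6e6;
    RHO_KG_PER_M3 * G * lift_m / J_PER_KWH
}

fn main() {
    for lift_m in [100.0_f64, 300.0, 600.0] {
        println!(
            "{:>3.0} m of lift ≈ {:.2} kWh per cubic meter",
            lift_m,
            lift_energy_kwh_per_m3(lift_m)
        );
    }
}
```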
But as a last thought experiment, suppose we put the plant right in the middle of the San Joaquin River Delta, where the SF Bay’s salt water meets the Sacramento River’s freshwater. This area is already water-stressed, due to diversions of water to agriculture, leading to the endangerment of federally protected species. Pumping freshwater in here could raise the supply, but that water might be too clean: marine life requires the right mix of water and minerals, and desalinated water doesn’t tend to have the latter.
So it would still be a bad option there, even though power, salt water, and freshwater access are present. Anywhere else in the state is missing at least one of those three criteria.
For example, with all things being equal, you can very easily see if a certain wheel is creating more resistance than another.
But this product cannot compute drag figures for the bike. Its theory of operation limits it to computing only the drag on the rider. Also, to keep things simple in my original answer, I didn’t touch upon the complex bike+rider aerodynamic interactions, such as when turbulent air coming off the bike is smoothed out by the presence of the rider, which shifts a net-smaller amount of drag from the bike onto the rider. Optimizing for lowest rider drag could end up increasing the bike’s drag, inadvertently increasing overall drag.
But I think the real issue is the “all else being equal” part. If a team is trying to test optimal rider positions, then the only sensible way to test that in-field is to do A/B testing and hope for similar conditions. If the conditions aren’t similar enough, the only option is more runs. All to answer something which putting the rider+bike into a wind tunnel would have quickly answered. Guess-and-check is not a time-efficient solution for finding improvements.
Do I think all bike racing teams need a 24/7 wind tunnel? No, definitely not. For reference, the Wright Brothers built their own small wind tunnel to do small-scale testing, so it’s not like racing teams are out of options between this product and a full-blown (pun intended) wind tunnel. And of course, in the 21st Century, we have a rich library of shared aerodynamic research on racing bikes to lean on, plus fluid modeling software.
My initial reaction was “this cannot work”. So I looked at their website, which is mostly puffery and other flowery language. But to their credit, they’ve got two studies, err papers, err preprints, uh PDFs, one of which describes their validation of their product against wind tunnel results.
In brief, the theory of operation is that there’s a force sensor at each point where the rider meets the bike: handlebars, saddle, and pedals. Because Newton’s Third Law of Motion requires that aerodynamic forces on the rider be fully transferred to the bike – or else the rider is separating from the bike – the forces on these sensors will total to the overall aerodynamic forces acting on the rider.
From a theoretical perspective, this is actually sound, and would detect aero forces from any direction, regardless of whether they’re caused by clothing (eg a hoodie flailing in the air) or a cross-wind. It does require an assumption that the rider not contact any other parts of the bike, which is reasonable for racing bikes.
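As a toy sketch of just that totalizing step (not the vendor’s actual algorithm), assuming steady speed on level ground so only aero loads show up along the direction of travel, and with made-up sensor readings:

```rust
// Toy sketch: estimate the rider's aero drag by summing the direction-of-travel
// components reported by the three contact-point sensors. Assumes steady speed
// on level ground; a real device would also have to separate out propulsion,
// gravity, and inertial loads. All numbers are made up for illustration.
struct ContactForce {
    name: &'static str,
    longitudinal_n: f64, // component along the direction of travel, in newtons
}

fn rider_drag_estimate(contacts: &[ContactForce]) -> f64 {
    // Per the stated theory of operation: whatever aero force acts on the rider
    // must show up as reaction forces at these contact points.
    contacts.iter().map(|c| c.longitudinal_n).sum()
}

fn main() {
    let contacts = [
        ContactForce { name: "handlebars", longitudinal_n: 12.4 },
        ContactForce { name: "saddle", longitudinal_n: 6.1 },
        ContactForce { name: "pedals", longitudinal_n: 9.8 },
    ];
    for c in &contacts {
        println!("{:>10}: {:.1} N", c.name, c.longitudinal_n);
    }
    println!("estimated rider drag: {:.1} N", rider_drag_estimate(&contacts));
}
```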
But the practical issue is that while aero forces are totalized with this method, it provides zero insight into where the forces are being generated. This makes it hard to determine what rider position will optimize airflow for a given condition. Making aero improvements this way becomes a game of guess-and-check. Whereas in a wind tunnel, identifying zones of turbulent air is fairly easy, using – among other things – smoke to see how the air travels around the rider. The magnitude of the turbulent regions can then be quantified individually, which helps paint a picture of where improvements can be made.
For that reason alone, this is not at all a “wind tunnel killer”. It can certainly still find use, since it yields in-field measurements that can complement laboratory data. Though I’m skeptical about how a rider would even respond if given real-time info about their body’s current aerodynamic drag. Should they start tacking side to side? Tuck further in?
More data can be useful, but one of the unfortunate trends from the Big Data explosion is the assumption that more data is always useful. If that were true, everyone would be advised to undergo every preventative medical diagnostic annually, irrespective of risk. Whereas the current reality is that overdiagnosis is a real problem precisely because some doctors and patients are caught in that false assumption.
My conclusion: technically feasible but seems gimmicky.
“Not everybody can use a bike to get around — these are some of our major arterial roads, whether it is Bloor, University or Yonge Street — people need to get to and from work,” Sarkaria said.
This is some exasperatingly bad logic from the provincial Transport Minister. The idea that biking should be disqualified because the infrastructure cannot magically enable every single person to start biking is nonsense. By the same “logic”, the provincial freeways should be closed down because not everyone can drive a car. And then there’s some drivel about bike lanes contributing to gridlock, which is nonsense in the original meaning and disproven in the colloquial meaning.
It is beyond the pale that provincial policy will impose a ceiling on what a municipality can do with its locally-managed roads. At least here in America, a US State would impose only a floor and cities would build up from there. Such minimums include things like driving on the right and how speed limits are computed. But if a US city or county aspires to greatness, there is no general rule against upgrading a road to an expressway, or closing a downtown street to become fully pedestrianized.
How can it be that Ontario policy will slide further backwards than that of US States?
My recommendation is to start with getting fax to work locally. As in, from port 1 of a single SPA2102 to port 2 of the same. This would validate that your fax machines and the SPA2102 are operational, and it’s just entertaining in its own right to have a dial tone that “calls” the other port.
Fortunately, Gravis from the Cathode Ray Dude YouTube channel has a writeup to do exactly that, and I’ve personally followed these steps on an SPA122 with success, although I was doing a silly phone project, not a fax project. https://gekk.info/articles/ata-config.html
If you’re lucky, perhaps fax will Just Work because your machines are very permissive with the signals they receive and can negotiate. If not, you might have to adjust the “fax optimizations” discussed here: https://gekk.info/articles/ata-dialup.html
Once local faxing works, you can then try connecting two VoIP devices together over the network. This can be as simple as direct SIP dialing using an IP address and port number, or can involve setting up a PBX that both devices register against.
On one hand, I’m pleased that C++ is answering the call for what I’ll call “safety as default”, since, as The Register and everyone else have pointed out, if safety constructs are “bolted on” like an afterthought, then of course they’re not going to see very high adoption. Contrast this with Rust and its “unsafe” keyword, which marks all the places where the minimum safety of the language might not hold.
On the other hand, while this Safe C++ proposal adopts a similar notion of an “unsafe” context, it also adds a “safe” keyword, to specify that a function will conform to compile-time safety checks. But as the proposal readily admits:
Rust’s functions are safe by default. C++’s are unsafe by default.
While the proposal will surely continue to evolve before being implemented, I foresee a situation similar to C, where code that wasn’t const-correct from the start struggles to work with newer code and libraries. In this case, it would be the “unsafe” keyword that proliferates everywhere just to call older, unsafe code from newer, safe callers.
Rust has the advantage that there isn’t much (if any) legacy Rust to maintain, which means the volume of unsafe code in Rust programs is minimal, making them safer overall today. But for Safe C++ code, there’s going to be a lot of unsafe legacy C++ code, and that reduces the safety benefit for programs overall, for the time being.
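To illustrate in today’s Rust what I expect that to look like (hypothetical names; this is not the Safe C++ proposal’s actual syntax): a legacy unsafe API forces every new, safe caller to either drop an `unsafe` block at each call site or hide them all behind a checked wrapper.

```rust
// A legacy-style API with no bounds checking, standing in for old unsafe code.
unsafe fn read_raw(values: &[f64], idx: usize) -> f64 {
    // SAFETY: caller must guarantee idx < values.len().
    unsafe { *values.get_unchecked(idx) }
}

// New, safe code either sprinkles `unsafe` at every call site or funnels all
// calls through one checked wrapper like this.
fn read_checked(values: &[f64], idx: usize) -> Option<f64> {
    if idx < values.len() {
        // SAFETY: idx was bounds-checked above.
        Some(unsafe { read_raw(values, idx) })
    } else {
        None
    }
}

fn main() {
    let data = [1.0, 2.0, 3.0];
    println!("{:?}", read_checked(&data, 1)); // Some(2.0)
    println!("{:?}", read_checked(&data, 9)); // None
}
```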
Even as this proposal progresses, the question of whether to start rewriting some code anew in Rust remains relevant. But this is still exciting as a new option to raise the bar in memory safety in C++.
My literacy in the German language is almost nil, but it seems patently unreasonable for an author or journalist to believe that over half of the incidents involving a fairly common activity would be fatal. Now, I should say that I’m basing this on prior knowledge of the German e-bike/pedelec market, where over half the bikes sold there are electric. What this would imply: of the sizable population of the country, take the subset who ride bicycles, then the subset who ride pedelecs, and then the subset who get into a collision or other incident, and somehow it’s believable that over half of those will die?
That cannot possibly be true, does not pass the sniff test, and isn’t even passable as a joke. If it were true, there would be scores of dead riders left and right, in every city in the country, daily. I suspect it would overtake (pun intended) the number of murders in that fairly safe country.
Compare this with parachuting, for which a headline of “most accidents are fatal” would be far more plausible. I’m shocked that no one in the publication’s chain of command noticed such a gross error. While it’s true that some statistics are bona fide shocking – American shooting deaths come to mind – this is a very bizarre instance of confirmation bias, since no one noticed the error.
I was led to believe that cycling in Germany is “normalized but marginalized”, but this type of error speaks to some journalistic malpractice.
This does not agree with what the Social Security Administration has published:
Q20: Are Social Security numbers reused after a person dies?
A: No. We do not reassign a Social Security number (SSN) after the number holder’s death. Even though we have issued over 453 million SSNs so far, and we assign about 5 and one-half million new numbers a year, the current numbering system will provide us with enough new numbers for several generations into the future with no changes in the numbering system.