Tuesday, December 18, 2018

US Air Force set to launch 1st next-generation GPS satellite


A look at what it takes to design, build, test and launch
one of Lockheed Martin’s next-generation GPS III satellites for the U.S. Air Force.
From establishing the modern economy to bringing you home safely, the Global Positioning System (GPS) is a key component of our everyday lives.
More than four billion military, commercial and civil users worldwide connect with GPS’ valuable positioning, navigation and timing (PNT) signals.
And Lockheed Martin’s advanced, new GPS III is ready to launch the next generation of connection.
GPS III satellites are more powerful and incredibly resilient, incorporate an advanced civilian user signal (L1C), provide three times the accuracy of previous GPS satellites, and are designed to evolve and incorporate new technology as it develops.
Launch of the first GPS III satellite is scheduled for December 18 and our phones will receive an upgraded GPS signal from this satellite by the end of 2019. 

From AirForceTimes by Dan Elliott

After months of delays, the U.S. Air Force is about to launch the first of a new generation of GPS satellites, designed to be more accurate, secure and versatile.

But some of their most highly touted features will not be fully available until 2022 or later because of problems in a companion program to develop a new ground control system for the satellites, government auditors said.


LIVE : SpaceX's GPS mission: Don't miss last Florida rocket launch of the year 
SpaceX is targeting Tuesday, December 18 for launch of the United States Air Force’s first Global Positioning System III space vehicle (SV) from Space Launch Complex 40 (SLC-40) at Cape Canaveral Air Force Station, Florida.
The 26-minute launch window opens at 9:11 a.m. EST, or 14:11 UTC.
The satellite will be deployed to medium Earth orbit approximately 1 hour and 56 minutes after liftoff. A 26-minute backup launch window opens on Wednesday, December 19 at 9:07 a.m. EST, or 14:07 UTC.
Note: The Youtube event start time reflects the estimated liftoff time for this mission. SpaceX's live webcast will begin about 15 minutes before liftoff.

The satellite is scheduled to lift off Tuesday from Cape Canaveral, Florida, aboard a SpaceX Falcon 9 rocket. It’s the first of 32 planned GPS III satellites that will replace older ones now in orbit. Lockheed Martin is building the new satellites outside Denver.

GPS is best-known for its widespread civilian applications, from navigation to time-stamping bank transactions.
The Air Force estimates that 4 billion people worldwide use the system.

But it was developed by the U.S. military, which still designs, launches and operates the system.
The Air Force controls a constellation of 31 GPS satellites from a high-security complex at Schriever Air Force Base outside Colorado Springs.

Compared with their predecessors, GPS III satellites will have a stronger military signal that's harder to jam — an improvement that became more urgent after Norway accused Russia of disrupting GPS signals during a NATO military exercise this fall.

GPS III also will provide a new civilian signal compatible with other countries' navigation satellites, such as the European Union's Galileo system.
That means civilian receivers capable of receiving the new signal will have more satellites to lock in on, improving accuracy.
"If your phone is looking for satellites, the more it can see, the more it can know where it is," said Chip Eschenfelder, a Lockheed Martin spokesman.

It is a part of our lives every day and over a billion people depend on this technology.
GPS, unlike any other military program, has expanded into the commercial market, becoming a mainstay of everyday life for people around the world.
See how Lockheed Martin continues to improve this U.S. Air Force provided asset with GPS III.
With satellites 1-8 already in production, this entirely new satellite design will take positioning, navigation and timing to the highest level and will continue to evolve into the future.
Simply put, GPS III is the most powerful and capable GPS satellite ever built – and it is already here.

The new satellites are expected to provide location information that's three times more accurate than the current satellites.
Current civilian GPS receivers are accurate to within 10 to 33 feet (3 to 10 meters), depending on conditions, said Glen Gibbons, the founder and former editor of Inside GNSS, a website and magazine that tracks global navigation satellite systems.
With the new satellites, civilian receivers could be accurate to within 3 to 10 feet (1 to 3 meters) under good conditions, and military receivers could be a little closer, he said.

Only some aspects of the stronger, jamming-resistant military signal will be available until a new and complex ground control system is available, and that is not expected until 2022 or 2023, said Cristina Chaplain, who tracks GPS and other programs for the Government Accountability Office.
Chaplain said the new civilian frequency won't be available at all until the new control system is ready.

The price of the first 10 satellites is estimated at $577 million each, up about 6 percent from the original 2008 estimate when adjusted for inflation, Chaplain said.
The Air Force said in September it expects the remaining 22 satellites to cost $7.2 billion, but the GAO estimated the cost at $12 billion.

The first GPS III satellite was declared ready nearly 2½ years behind schedule.
The problems included delays in the delivery of key components, retesting of other components and a decision by the Air Force to use a Falcon 9 rocket for the first time for a GPS launch, Chaplain said. That required extra time to certify the Falcon 9 for a GPS mission.

The new ground control system, called OCX, is in worse shape.
OCX, which is being developed by Raytheon, is at least four years behind schedule and is expected to cost $2.5 billion more than the original $3.7 billion, Chaplain said.
The Defense Department has struggled with making sure OCX meets cybersecurity standards, she said.
A Pentagon review said both the government and Raytheon performed poorly on the program.
Raytheon has overcome the cybersecurity problems, and the program has been on budget and on schedule for more than a year, said Bill Sullivan, a Raytheon vice president on the OCX program.
Sullivan said the company is on track to deliver the system to the Air Force in June 2021, ahead of GAO's estimates.
The Air Force has developed work-arounds so it can launch and use GPS III satellites until OCX is ready to go.

While the first GPS III waits for liftoff in Florida, the second is complete and ready to be transported to Cape Canaveral.
It sits in a cavernous "clean room" at a Lockheed Martin complex in the Rocky Mountain foothills south of Denver.

It's expected to launch next summer, although the exact date hasn't been announced, said Jonathon Caldwell, vice president of Lockheed Martin's GPS program.
Six other GPS satellites are under construction in the clean room, which is carefully protected against dust and other foreign particles.
"It's the highest-volume production line in space," Caldwell said.

For the first time, the Air Force is assigning nicknames to the GPS III satellites.
The first one is Vespucci, after Amerigo Vespucci, the Italian navigator whose name was adopted by early mapmakers for the continents of the Western Hemisphere.

Monday, December 17, 2018

Norwegian frigate sinking has far-reaching implications


From The Strategist (ASPI) by Sam Bateman

In an incident that has attracted relatively little media attention in Australia, the modern 5,300-ton Norwegian frigate KNM Helge Ingstad sank in a Norwegian fjord after a collision with the large Maltese-registered oil tanker Sola TS.


 Helge Ingstad position with the GeoGarage platform (NHS nautical chart)

 position with Google Earth

It’s now clear what happened.
In the early hours of 8 November, the Ingstad was proceeding at 17 knots along the Hjeltefjorden near the Sture oil terminal.
The Sola TS had just left the terminal fully laden and was proceeding at 7 knots.
The watch on the Ingstad, which had just changed, thought that the deck lights of the tanker were part of the well-lit terminal.

KNM Helge Ingstad sunk
Gulliver Floating Sheerlegs and Crane Barges/Crane Pontoons 

The Sola TS became concerned about the situation.
However, because the Ingstad wasn’t showing automatic identification system (AIS) data, initially neither the Sola TS nor the traffic station on shore could identify the frigate to warn it of the imminent danger.
Repeated warnings to the Ingstad after it had been identified failed to get it to alter course until just seconds before the collision.
The heavily laden tanker couldn’t manoeuvre out of the way.


The Ingstad suffered extensive hull damage along the starboard side, lost propulsion and steering control, and experienced flooding in three compartments, before running aground and later sinking.
Eight crew members were injured.

Aerial view, see film

Commissioned in 2009 and built by the Spanish shipbuilder Navantia, the Helge Ingstad was the fourth of the Fridtjof Nansen class of frigates in the Royal Norwegian Navy.
Australia’s Hobart-class air warfare destroyers are of a broadly similar Navantia design.

Navantia has produced several designs similar to the Nansen class, including under the trilateral frigate agreement set up by the Netherlands, Germany and Spain.
Through this agreement, the F100 class of frigates is being built in Spain by Navantia, and the Dutch De Zeven Provincien class and the German F124 Sachsen class are being built by other companies.

A preliminary investigation by Norwegian authorities found that confusion on the Ingstad's bridge was the immediate cause of the collision, but that the ship sank because of progressive flooding.
After the collision, water quickly moved through several watertight compartments, apparently via the ship’s propeller shafts, which pass through the bulkheads between the compartments through theoretically watertight openings (known as stuffing tubes or stuffing boxes) that should prevent progressive flooding.

Not only did the sinking of the uninsured frigate cost the Norwegian Navy its entire annual budget, but the country also lost millions of additional kroner, as several oil and gas fields were temporarily shut down because of the accident, which experts find inexplicable.

Based on crew interviews, authorities determined that the stuffing boxes weren’t working properly, jeopardising the watertightness of the ship.
The investigation report warned that the faults that sank the Ingstad could also be present in other Navantia ships, raising questions about a possible problem with the design.

The Ingstad accident has eerie similarities to the serious collisions suffered by US Navy destroyers during a horror year in 2017.
The Ingstad was proceeding at excessive speed in a busy shipping area and wasn’t showing AIS information, and the team on her bridge clearly lost situational awareness and failed to appreciate the serious situation that was developing.

There are lessons here for navies around the world.
First, for questionable operational security reasons, warships often don’t show AIS data, even though it’s a vital collision-avoidance mechanism that’s used extensively by the commercial shipping sector.
Not using AIS may be acceptable on the open ocean, but it’s poor practice in busy shipping lanes.
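
What AIS buys a bridge team or a shore traffic station is the ability to compute, from another vessel's broadcast position, course and speed, how close the two tracks will pass and when. Here is a minimal sketch of that closest-point-of-approach (CPA/TCPA) calculation; the positions, courses and speeds are invented for illustration and are not the Ingstad's or the Sola TS's actual data.

```python
import math

def cpa_tcpa(pos_a, cog_a, sog_a, pos_b, cog_b, sog_b):
    """Closest point of approach between two vessels assumed to hold straight tracks.

    pos_* : (east, north) position in nautical miles
    cog_* : course over ground in degrees
    sog_* : speed over ground in knots
    Returns (cpa_nm, tcpa_hours).
    """
    def velocity(cog, sog):
        rad = math.radians(cog)
        return (sog * math.sin(rad), sog * math.cos(rad))  # (east, north) components

    vax, vay = velocity(cog_a, sog_a)
    vbx, vby = velocity(cog_b, sog_b)
    rx, ry = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]      # relative position B - A
    dvx, dvy = vbx - vax, vby - vay                        # relative velocity
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0:                                           # identical velocity: range is constant
        return math.hypot(rx, ry), 0.0
    tcpa = max(-(rx * dvx + ry * dvy) / dv2, 0.0)          # time of closest approach (>= now)
    cpa = math.hypot(rx + dvx * tcpa, ry + dvy * tcpa)
    return cpa, tcpa

# Invented example: a frigate at 17 kn on 200 degrees, a tanker 2.7 nm away at 7 kn heading north.
cpa, tcpa = cpa_tcpa((0.0, 0.0), 200, 17, (-1.0, -2.5), 0, 7)
print(f"CPA {cpa:.2f} nm in {tcpa * 60:.0f} minutes")
# A CPA this small, this soon, is exactly what should trigger an alarm and a VHF call.
```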

Genscape Vesseltracker's animated AIS replay shows the collision between oil tanker "Sola TS" and a Norwegian frigate "KNM Helge Ingstad" off the west coast of Norway on November 8, 2018.
Wrecked frigate's crew thought oncoming tanker was fixed object

After the US Navy accidents, the chief of naval operations instructed his ships to show AIS when they’re in heavy shipping traffic.
This was apparently a message that had not got through to the Royal Norwegian Navy, although it’s been reported that an American naval exchange officer was onboard the Ingstad at the time of the collision.

Radio and radar logs from the collision between the Norwegian frigate HNoMS Helge Ingstad, Nato designation F313, and the tanker Sola TS in Hjeltefjorden, north of Bergen, on November 8, 2018. see Medium transcription

Second, the high-tech bridge of a modern warship isn’t amenable to using the most basic sensory mechanism of all—what is often referred to as either the ‘seaman’s eye’ or the ‘Mark One eyeball’.
The many screens and electronic data systems on a bridge can preoccupy the bridge team and distract them from what is happening around them.

An accident such as that suffered by the Ingstad can have many causes, the sum of which leads to the collision.
In addition to the ones already mentioned, two other factors contributed to the incident.
First, the collision occurred soon after the watch had changed on the bridge, and the incoming watch may not have gained a proper perspective of the situation that was emerging.

Location of the collision between HNoMS Helge Ingstad and tanker Sola TS

Second, the tanker was extensively lit up by deck lights that may have obscured the navigational lights, leading the incoming watch to believe that the Sola TS’s lights were part of the terminal.
A fully professional bridge team, however, should have observed that the tanker was both underway at seven knots and showing AIS.

The incident raises questions about the survivability of modern warships with their lightweight construction and a design emphasis on their weapons and sensors rather than on ship integrity and damage control.

It also raises questions about the basic training and seamanship skills of bridge watchkeepers.
The high-tech bridges of modern warships can be congested with both people and equipment.
This environment is not conducive to the exercise of basic safe seafaring practices, such as the use of the ‘seaman’s eye’.

Modern navies must ensure that their bridge personnel are safe seafarers, as well as skilled equipment operators.

Links :

Sunday, December 16, 2018

Sailing school (1956)

Newton Ferrers, Devon. 
The Newton Ferrers School of Yachting is run by Lt. Commander Rab Moore and his partner Dennis Montgomery.
It is the only school where students learn to sail on dry land first.
The students are seen gathered around the model of a yacht and Rab Moore points to various parts of the boat.
M/S of the land trainer as two women students get in and learn to sail without the hazards of the water.
They jump from one side to the other with the instructor watching.
Beautiful setting at Newton Ferrers on River Yealm as sailing boat sets out from harbour.
Man and woman take small dinghy out.
Then students are on board a large yacht being shown how to tie knots. On board the 30 foot Gaff Cutter "Ravenswing" the students learn how to put sails up correctly.
M/S "Ravenswing" sails along the River Yealm. 

Friday, December 14, 2018

Why deep oceans gave life to the first big, complex organisms

Fossil photo from the Ediacara Biota.
(Photo by James Gehling)

From Phys

In the beginning, life was small.
For billions of years, all life on Earth was microscopic, consisting mostly of single cells.
Then suddenly, about 570 million years ago, complex organisms including animals with soft, sponge-like bodies up to a meter long sprang to life.
And for 15 million years, life at this size and complexity existed only in deep water.

Scientists have long questioned why these organisms appeared when and where they did: in the deep ocean, where light and food are scarce, in a time when oxygen in Earth's atmosphere was in particularly short supply.
A new study from Stanford University, published Dec. 12 in the peer-reviewed Proceedings of the Royal Society B, suggests that the more stable temperatures of the ocean's depths allowed the burgeoning life forms to make the best use of limited oxygen supplies.

Graphic showing origins of different Ediacarans
Thermal stability in the deep ocean fostered complex life
All of this matters in part because understanding the origins of these marine creatures from the Ediacaran period is about uncovering missing links in the evolution of life, and even our own species.
"You can't have intelligent life without complex life," explained Tom Boag, lead author on the paper and a doctoral candidate in geological sciences at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth).

The new research comes as part of a small but growing effort to apply knowledge of animal physiology to understand the fossil record in the context of a changing environment.
The information could shed light on the kinds of organisms that will be able to survive in different environments in the future.

"Bringing in this data from physiology, treating the organisms as living, breathing things and trying to explain how they can make it through a day or a reproductive cycle is not a way that most paleontologists and geochemists have generally approached these questions," said Erik Sperling, senior author on the paper and an assistant professor of geological sciences.

Playful illustration shows the appearance of life on Earth as well as the events that preceded it (and were necessary for it).
Complex life develops in the ocean at first, but soon it will try to see how it is on land.
Sea animals are coming out on land in search of food and new experiences.

Goldilocks and temperature change

Previously, scientists had theorized that animals have an optimum temperature at which they can thrive with the least amount of oxygen.
According to the theory, oxygen requirements are higher at temperatures either colder or warmer than a happy medium.
To test that theory in an animal reminiscent of those flourishing in the Ediacaran ocean depths, Boag measured the oxygen needs of sea anemones, whose gelatinous bodies and ability to breathe through the skin closely mimic the biology of fossils collected from the Ediacaran oceans.

"We assumed that their ability to tolerate low oxygen would get worse as the temperatures increased.
That had been observed in more complex animals like fish and lobsters and crabs," Boag said.
The scientists weren't sure whether colder temperatures would also strain the animals' tolerance.
But indeed, the anemones needed more oxygen when temperatures in an experimental tank veered outside their comfort zone.

Together, these factors made Boag and his colleagues suspect that, like the anemones, Ediacaran life would also require stable temperatures to make the most efficient use of the ocean's limited oxygen supplies.

Factors governing oxygen supply to animals.
(a) Average annual partial pressure of O2 (pO2) in the global ocean at the surface.
(b) Average annual solubility of O2 (αO2) in the global ocean at the surface. Values increase with latitude owing to the thermal effects on Henry's solubility coefficient.
(c) Average annual diffusivity of O2 (DO2) in the global ocean at the surface.
(d) Average annual bioavailability of O2 in the global ocean at the surface, expressed using the oxygen supply index (OSI).
Despite the increased solubility of O2 in cold water, the kinematic viscosity also increases substantially, reducing the diffusivity of O2 at a rate greater than the offsetting effect on solubility.
As a result, the supply of O2 to respiratory surfaces actually decreases approximately linearly as water becomes colder.

Refuge at depth

It would have been harder for Ediacaran animals to use the little oxygen present in cold, deep ocean waters than in warmer shallows because the gas diffuses into tissues more slowly in colder seawater.
Animals in the cold have to expend a larger portion of their energy just to move oxygenated seawater through their bodies.
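
The trade-off described above, where colder water holds more oxygen but delivers it across respiratory surfaces more slowly, can be sketched with a toy calculation. The temperature dependencies and coefficients below are simplified assumptions chosen only to illustrate the idea, not the calibrated relations used in the study.

```python
import math

# Toy temperature dependencies (illustrative coefficients, not the paper's values):
# solubility falls as water warms (Henry's law), roughly -1.5% per degree C here,
# while diffusivity rises roughly +3% per degree C because warmer water is less viscous.
# Both curves are normalised to 1.0 at 10 degrees C.
def solubility(temp_c):
    return math.exp(-0.015 * (temp_c - 10.0))

def diffusivity(temp_c):
    return math.exp(0.03 * (temp_c - 10.0))

def oxygen_supply_index(temp_c):
    # Supply to a respiratory surface scales with how much O2 the water holds
    # (solubility) and how quickly it diffuses across the boundary layer (diffusivity).
    return solubility(temp_c) * diffusivity(temp_c)

for t in (0, 5, 10, 15, 20, 25):
    print(f"{t:2d} C  OSI = {oxygen_supply_index(t):.2f}")
# Because the diffusivity term changes faster than solubility, the index falls as the
# water cools, matching the point above that O2 supply to tissues drops in colder seas.
```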

But what it lacked in useable oxygen, the deep Ediacaran ocean made up for with stability.
In the shallows, the passing of the sun and seasons can deliver wild swings in temperature — as much as 10 degrees Celsius (18 degrees Fahrenheit) in the modern ocean, compared to seasonal variations of less than 1 degree Celsius at depths below one kilometer (0.62 mile).
"Temperatures change much more rapidly on a daily and annual basis in shallow water," Sperling explained.

Impact of seasonal temperature variation on aerobic respiration in low pO2 conditions
In a world with low oxygen levels, animals unable to regulate their own body temperature couldn't have withstood an environment that so regularly swung outside their Goldilocks temperature.

The Stanford team, in collaboration with colleagues at Yale University, propose that the need for a haven from such change may have determined where larger animals could evolve.
"The only place where temperatures were consistent was in the deep ocean," Sperling said.
In a world of limited oxygen, the newly evolving life needed to be as efficient as possible and that could only be achieved in the relatively stable depths.
"That's why animals appeared there," he said.

Links :

Thursday, December 13, 2018

Mechanics of Nazaré

Nazare in full glory.
Photo: Andre Botelho

From Surfline

How one break in Portugal creates the world's largest waves

A wave that produces the 80-foot Guinness world record for largest wave ever ridden needs no introduction.
Even to the non-surfing community, little is needed when mainstream media regularly runs photos and videos of every XXL swell that hits the small Portuguese fishing town.
Hell, CNN’s Anderson Cooper even rode through the rocks on the back of a ski — piloted by none other than previous world record holder (also caught at Nazaré), Garrett McNamara.

Above: Under the right conditions, XL Nazare looks almost inviting.
Photo: Jeremiah Klein

Nazaré is well known for good reason.
It regularly produces the largest rideable waves on planet Earth.
And thanks to its ultimate deepwater canyon setup, Nazaré's surf size potential is bounded only by the size and direction of the swell it receives.

Swell Source
  • Strongest swells of year from October through April when intense mid-latitude frontal lows track eastward across the North Atlantic, interacting with adjacent high pressure.
  • Typical storm track moves towards Europe helping maximize swell potential.
  • Strongest swells, from the WNW to NW, are often consistent, ranging from short to long period.
  • Travel time from one to five days.
  • Peak hurricane season from mid-August through mid-October can offer a variety of swell directions. Recurving tropical cyclones often undergo extratropical transition (most common, October) or enhance developing winter storms. Tropical systems can impact the region with wind and weather, like Cyclone Leslie in October 2018.
  • Local windswell events do occur and can provide fun surf. Events are not as strong as above mentioned swell sources and do not produce the signature XL surf.
The preferred swell source and swell window for Nazare.

Swell Window
  • Nazaré’s swell window is technically open from SW (226°) to N (357°). West to NW angled swells are strongest and most common; WNW swells are ideal.
  • Between the Peniche peninsula at 226° and 251° lies a small group of islands known as the Berlengas Archipelago. A fraction of swell energy filters through these islands and is not as strong as the more prominent West-NW swells.
  • Southerly angled swells are usually local windswell events (e.g., ahead of an approaching front), often coinciding with unfavorable onshore wind.
  • Nazaré receives more northerly angled swells up to 357°. North-northwest to N swells are not a favorable direction — shorter period swells generally sweep across the beach, longer period swells see an occasional canyon set that is too crossed up. Wave amplification through refraction by the canyon and associated constructive interference is not as impactful from the north.
Doesn’t look very playful now, does it? And good luck timing the sets.
Photo: Andre Botelho

Bathymetry

Bathymetry is vital in how waves behave when approaching and breaking along shore, refracting energy into or away from different locations with each variation in swell direction or period.
The surf at certain points can be amplified to greater heights, while other spots are left in a swell void.
And the best spot on the planet to observe extreme wave refraction is Nazaré.

The large, deepwater Nazaré Canyon has the potential to significantly amplify the surf at the beach just to the north of the bay.
Wave-face height can reach three, four, even five times the offshore deepwater swell height.
But this magnification is highly dependent on the incoming swell angle and period.
Generally, Nazaré favors a longer period swell from the WNW.

Energy in longer period swells extends deeper within the water column, feeling the contours of the ocean bottom sooner, and with a greater degree of effect.
Since swells always refract toward shallower water, longer period swells start to turn and bend sooner and more effectively.
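
That behaviour follows from linear wave theory: phase speed depends on both depth and period, and a wave ray bends according to Snell's law as it slows. The sketch below illustrates the effect with a simple dispersion solver; the depths and approach angle are invented for illustration and are not actual Nazaré Canyon bathymetry.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phase_speed(period_s, depth_m, tol=1e-6):
    """Wave phase speed from the linear dispersion relation (damped fixed-point solve)."""
    L0 = G * period_s ** 2 / (2 * math.pi)        # deep-water wavelength
    L = L0
    for _ in range(200):
        L_new = L0 * math.tanh(2 * math.pi * depth_m / L)
        if abs(L_new - L) < tol:
            break
        L = 0.5 * (L + L_new)                     # damping keeps the iteration stable
    return L / period_s

def refracted_angle(theta_deg, c_from, c_to):
    """Snell's law for water waves: sin(theta) / c is conserved along a wave ray."""
    s = math.sin(math.radians(theta_deg)) * c_to / c_from
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))

# Compare a short- and a long-period swell moving from 3000 m of water onto a 50 m shelf.
# Depths and the 40-degree offshore approach angle are illustrative only.
for T in (8, 18):
    c_deep, c_shelf = phase_speed(T, 3000), phase_speed(T, 50)
    theta_shelf = refracted_angle(40, c_deep, c_shelf)
    print(f"T = {T:2d} s: deep c = {c_deep:4.1f} m/s, shelf c = {c_shelf:4.1f} m/s, "
          f"ray turns from 40.0 deg to {theta_shelf:.1f} deg")
# The long-period swell slows far more over shallow ground, so it bends much harder toward
# the shallows; that differential bending is what lets the canyon/ridge contrast focus
# long-period WNW energy onto the peak.
```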


For Nazaré, there is a steep contrast between the large and deep canyon running offshore and the much shallower ridge that lines the northern slope.
This canyon/ridge relationship extends far offshore and runs all the way up to the break.
The portion of the swell running through the deep canyon maintains a greater percentage of its raw open ocean energy and forward speed closer to shore.
And upon interacting with the adjacent ridge, much of this energy will refract out of the canyon and focus back in toward the break.

The various bends of the canyon also play a role, helping create a more complex scenario of refracting and converging waves.
Meanwhile, the inbound swell traveling over the shallower water north of the canyon starts to gradually slow down and shoal when nearing the coast — and much of this energy focuses toward Nazaré as well.
The result is a compression of these refracting swell lines as they converge at the break, amplifying the waves.

However, there is another key factor at work besides just refractive pileup that helps contribute to the extreme magnification of the waves here, and that is constructive interference (Note – A spot like the Wedge also has this X-factor going on).
After extensive research on the bathymetry, running various swell scenarios through our high-powered computer simulations, and athlete observations, we do know the “magic numbers” for the canyon to perform at its maximum potential.
And the direction is just as important as the period.

From Surfline Labs: Animation (4x normal speed) from Surfline Labs shows the swell that provided the world record wave on November 8th, 2017.
It shows the effects of the Nazare canyon to create mutant, XXL peaks.
The reds indicate peaks, the blues show the troughs — the biggest (red) peaks only appear sporadically when a multitude of factors come perfectly together.
It was timing, and luck, that allowed Rodrigo Koxa to catch, and break, the world record wave.

Given the unique layout of this underwater landscape, incoming long period swells from the WNW are ideal for Nazaré.
These swells have just enough west in them to allow the canyon to refract at its fullest potential, yet just enough north that most of the swell is refracting back toward this particular stretch of beach, instead of away to the south.
The north component allows the portion of the swell not running through the canyon to converge with the waves refracting out of the canyon — a combo of NW and SW waves in the surf zone.

If there is too much north in the swell, then the canyon has difficulty refracting swell back toward the north, thus providing less energy and lowering the potential for larger surf.
The sets that do refract from the canyon are almost too peaky with more slopey, mushy shoulders.
There is often more current running on these more northerly angled swells as well.

For W to WSW swells, the refracting energy from the canyon is more evenly split to the north and south, also lowering the potential for larger surf at Nazaré.
SW swells are partially shadowed by offshore islands.
The surf is not as peaky on these swells, as west lines north of the canyon square up more to the coast with less convergence from waves refracting from the canyon.
For more southerly angled swells, the canyon refracts more energy to areas to the south, considerably lowering the refracting factor and peaky nature of Nazaré.

A ski is not required to ride this train — until it gets to a certain size.
Photo: Klein

Wind

Like most spots, Nazaré prefers calm or light to moderate offshore wind (east to southeast).
Strong offshore wind can create hazardous conditions and is almost as problematic as an onshore wind, especially in big surf.
Strong offshores make it very difficult to paddle into waves and create surface chop running up the wave faces.
Bigger, faster-moving waves run into stronger opposition from the offshore flow, aggravating the sea surface even more.

High pressure overhead or to the north to northeast of Portugal sets up offshore flow for Nazare.

Located on the far southwestern edge of Europe, Nazaré fares better than spots at higher latitudes when it comes to severe winter weather.
Systems tracking through the higher latitudes, or storms that lift northward before nearing Europe, can provide good swell with less adverse local weather.

But storms tracking through the lower latitudes can bring poor wind and weather along with swell.
Approaching fronts often bring onshore winds and stormy conditions to the region.
High pressure building in behind these fronts, either over the region or to the north or northeast, turns the wind offshore and improves local weather.
Nazaré can handle light onshores as the waves themselves block the wind on big days and the cliffs shelter the waves from a southerly wind.

The view from the cliff.
Safer than a view from the water.
Photo: Klein

Best Conditions for Nazaré
  • Best Tide: Mid, prefers incoming
  • Best Swell Direction: West-Northwest to Northwest
  • Best Swell Period: Longer period
  • Best Wind: Calm or light to moderate offshore (east-southeast)
  • Best Size: Works on all sizes, no limit on max size
  • Best Season: Fall generally best, winter and spring very solid too
  • Resources for Nazaré
Links :

Wednesday, December 12, 2018

Sails make a comeback as shipping tries to go green

Car manufacturer, Groupe Renault, is partnering with French designer and operator of cargo sailing ships, NeoLine, to reduce the carbon footprint of the Group’s supply chain.
NeoLine has designed a 136-meter ro-ro with 4,200 square meters of sail area that it says has the potential to reduce CO2 emissions by up to 90 percent, primarily through the use of wind power combined with a reduced speed and an optimized energy mix. NeoLine plans to commission the vessels by 2020-2021 on a pilot route joining Saint-Nazaire in France, the U.S. Eastern seaboard and Saint-Pierre and Miquelon (off the coast of Newfoundland in Canada).

From The Sentinel by Kelvin Chan

As the shipping industry faces pressure to cut climate-altering greenhouse gases, one answer is blowing in the wind.

European and U.S. tech companies, including one backed by airplane maker Airbus, are pitching futuristic sails to help cargo ships harness the free and endless supply of wind power.
While they sometimes don't even look like sails -- some are shaped like spinning columns -- they represent a cheap and reliable way to reduce CO2 emissions for an industry that depends on a particularly dirty form of fossil fuels.

The merchant shipping industry releases 2.2% of the world’s carbon emissions, about the same as Germany, and the International Maritime Organization estimates that could increase up to 250% by 2050 if no action is taken.
Finnish company Norsepower may have a solution in the spinning cylinders they’ve designed for ships to harness wind power and produce forward thrust.
The result is a ship that needs less fuel to travel the seas - a major boost to the industry that transports 90% of international trade.
VICE News took a ride on the Estraden, a cargo ship fitted with Norsepower Rotor Sails, to see the technology that can reduce a ship’s carbon emissions by 1000 tons per year.
If all 50,000 merchant ships adopted Norsepower Rotor Sails, the costs saved on fuel would be over $7 billion a year, and the emissions prevented would equal more than 12 coal fired power plants.
While zero emission ships could be achieved using Rotor Sails paired with other alternative fuel sources, the economic incentives haven’t been strong enough to mobilize the industry just yet.
But strides such as those taken by Norsepower could help kickstart a widescale greening of the industry.

"It's an old technology," said Tuomas Riski, the CEO of Finland's Norsepower, which added its "rotor sail" technology for the first time to a tanker in August.
"Our vision is that sails are coming back to the seas."

Denmark's Maersk Tankers is using its Maersk Pelican oil tanker to test Norsepower's 30 meter (98 foot) deck-mounted spinning columns, which convert wind into thrust based on an idea first floated nearly a century ago.

Separately, A.P. Moller-Maersk, which shares the same owner and is the world's biggest container shipping company, pledged this week to cut carbon emissions to zero by 2050, which will require developing commercially viable carbon neutral vessels by the end of next decade.

This is Enercon's E-Ship 1, a 128 m cargo vessel built in 2010 and designed for the transportation of wind turbine components. She is a most unusual looking ship, featuring four 27 m tall Flettner rotor sails that rotate rapidly; thanks to the Magnus effect, this design helps reduce engine fuel costs through greater efficiency.

The shipping sector's interest in "sail tech" and other ideas took on greater urgency after the International Maritime Organization, the U.N.'s maritime agency, reached an agreement in April to slash emissions by 50 percent by 2050.

Transport's contribution to earth-warming emissions is in focus as negotiators in Katowice, Poland, gather for U.N. talks to hash out the details of the 2015 Paris accord on curbing global warming.

Beluga Projects SkySails

Shipping, like aviation, isn't covered by the Paris agreement because of the difficulty attributing their emissions to individual nations, but environmental activists say industry efforts are needed.
Ships belch out nearly 1 billion tons of carbon dioxide a year, accounting for 2-3 percent of global greenhouse gases. The emissions are projected to grow by 50 to 250 percent by 2050 if no action is taken.

Notoriously resistant to change, the shipping industry is facing up to the need to cut its use of cheap but dirty "bunker fuel" that powers the global fleet of 50,000 vessels -- the backbone of world trade.

The IMO is taking aim more broadly at pollution, requiring ships to start using low-sulfur fuel in 2020 and sending ship owners scrambling to invest in smokestack scrubbers, which clean exhaust, or looking at cleaner but pricier distillate fuels.

The GoodShipping Program is the world’s first initiative to decarbonize container shipping by changing the marine fuel mix – switching from heavy fuel oil towards sustainable marine fuel.
The Program enables cargo owners to make a change: their footprint from shipping will be reduced significantly, regardless of existing contracts, cargo routes and volumes.

A Dutch group, the Goodshipping Program, is trying biofuel, which is made from organic matter.
It refueled a container vessel in September with 22,000 liters of used cooking oil, cutting carbon dioxide emissions by 40 tons.

In Norway, efforts to electrify maritime vessels are gathering pace, highlighted by the launch of the world's first all-electric passenger ferry, Future of the Fjords, in April.
Chemical maker Yara is meanwhile planning to build a battery-powered autonomous container ship to ferry fertilizer between plant and port.
Ship owners have to move with the times, said Bjorn Tore Orvik, Yara's project leader.
Building a conventional fossil-fueled vessel "is a bigger risk than actually looking to new technologies ... because if new legislation suddenly appears then your ship is out of date," said Orvik.

Batteries are effective for coastal shipping, though not for long-distance sea voyages, so the industry will need to consider other "energy carriers" generated from renewable power, such as hydrogen or ammonia, said Jan Kjetil Paulsen, an advisor at the Bellona Foundation, an environmental non-government organization.
Wind power is also feasible, especially if vessels sail more slowly.
"That is where the big challenge lies today," said Paulsen.

The performance of the EcoFlettner, which has been tested on the MV Fehn Pollux since July, clearly exceeds the expectations of the scientists.
“The data we have evaluated so far significantly outmatch those of our model calculations,” says Professor Michael Vahs, who has been researching wind propulsion for seagoing vessels at the University of Applied Science Emden / Leer for more than 15 years.
“In perfect conditions, this prototype delivers more thrust than the main engine.”
15 companies from around Leer have been involved in the development and construction of the sailing system. The whole project is funded by the EU and coordinated by Mariko in Leer.
The rotor is 18 meters high and has a diameter of three meters.
After lengthy test runs ashore, the rotor is now being tested under real conditions aboard the 90-meter-long multi-purpose freighter MV Fehn Pollux.
On board MV Fehn Pollux, more than 50 different data streams are continuously collected and computed in real time by the Flettner control system on the bridge.
The computer uses the data to calculate the optimum settings for the rotor under the current conditions.

Wind power looks to hold the most promise.
The technology behind Norsepower's rotor sails, also known as Flettner rotors, is based on the principle that airflow speeds up on one side of a spinning object and slows on the other.
That creates a force that can be harnessed.

Rotor sails can generate thrust even from wind coming from the side of a ship.
German engineer Anton Flettner pioneered the idea in the 1920s but the concept languished because it couldn't compete with cheap oil.
On a windy day, Norsepower says rotors can replace up to 50 percent of a ship's engine propulsion. Overall, the company says it can cut fuel consumption by 7 to 10 percent.
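
For a rough sense of how a spinning column converts a beam wind into thrust, the sketch below applies the classical Kutta-Joukowski estimate for lift on a rotating cylinder. The rotor dimensions, spin rate and the idealized no-loss lift model are illustrative assumptions, not Norsepower's figures; real rotors deliver considerably less than this ideal number.

```python
import math

RHO_AIR = 1.225  # kg/m^3 at sea level

def rotor_lift_force(wind_speed, rotor_height, rotor_radius, rpm):
    """Idealized Magnus lift on a spinning cylinder (Kutta-Joukowski, no losses).

    Circulation around the cylinder: Gamma = 2 * pi * R * v_surface,
    lift per unit length: L' = rho * U * Gamma, total lift: L' * height.
    """
    surface_speed = rotor_radius * rpm * 2 * math.pi / 60.0   # m/s at the rotor skin
    circulation = 2 * math.pi * rotor_radius * surface_speed  # m^2/s
    lift_per_metre = RHO_AIR * wind_speed * circulation       # N per metre of rotor
    return lift_per_metre * rotor_height                      # N

# Illustrative rotor: 30 m tall, 2.5 m radius, spinning at 180 rpm in a 10 m/s beam wind.
force_n = rotor_lift_force(wind_speed=10.0, rotor_height=30.0, rotor_radius=2.5, rpm=180)
print(f"Idealized thrust of roughly {force_n / 1000:.0f} kN")
# Real rotors deliver far less than this ideal figure (finite aspect ratio, drag and
# spin-rate limits), which is why vendors quote fuel savings rather than raw thrust.
```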

Maersk Pelican: Trialling a pair of Norsepower Rotors under trading conditions

Maersk Tankers said the rotor sails have helped the Pelican use less engine power or go faster on its travels, resulting in better fuel efficiency, though it didn't give specific figures.

One big problem with rotors is they get in the way of port cranes that load and unload cargo.
To get around that, U.S. startup Magnuss has developed a retractable version.
The New York-based company is raising $10 million to build its concept, which involves two 50-foot (15-meter) steel cylinders that retract below deck.
"It's just a better mousetrap," said CEO James Rhodes, who says his target market is the "Panamax" size bulk cargo ships carrying iron ore, coal or grain.


High tech versions of conventional sails are also on the drawing board.
Spain's bound4blue makes an aircraft wing-like sail that collapses like an accordion, according to a video of a scaled-down version shown at a recent trade fair.
The first two will be installed next year followed by five more in 2020.
The company is in talks with 15 more ship owners from across Europe, Japan, China and the U.S. to install its technology, said co-founder Cristina Aleixendri.

Links :

Tuesday, December 11, 2018

Can Artificial Intelligence help build better, smarter climate models?

A computer simulation of carbon dioxide movement in the atmosphere.
The ‘Cloud Brain’ might make it possible to tighten up the uncertainties of how the climate will respond to rising carbon dioxide.
NASA

From e360 by Nicola Jones

Researchers have been frustrated by the variability of computer models in predicting the earth’s climate future.
Now, some scientists are trying to utilize the latest advances in artificial intelligence to focus in on clouds and other factors that may provide a clearer view.

Look at a digital map of the world with pixels that are more than 50 miles on a side and you’ll see a hazy picture: whole cities swallowed up into a single dot; Vancouver Island and the Great Lakes just one pixel wide.
You won’t see farmer’s fields, or patches of forest, or clouds.
Yet this is the view that many climate models have of our planet when trying to see centuries into the future, because that’s all the detail that computers can handle.
Turn up the resolution knob and even massive supercomputers slow to a crawl.
“You’d just be waiting for the results for way too long; years probably,” says Michael Pritchard, a next-generation climate modeler at the University of California, Irvine.
“And no one else would get to use the supercomputer.”

Earth recently experienced its largest annual increases in atmospheric carbon dioxide levels in at least 2,000 years.
These exchanges vary from year to year, and scientists are using OCO-2 data to uncover the reasons.
The many and varied uses of OCO-2 data will continue to be essential to understanding the dynamics of carbon dioxide across our planet and will help contribute to improved long-term climate forecasting.
NASA has released a video that explains the study and shows the changing levels of CO2.

The problem isn’t just academic: It means we have a blurry view of the future.
Importantly, it is hard to know whether a warmer world will bring more low-lying clouds that shield Earth from the sun, cooling the planet, or fewer of them, warming it up.
For this reason and more, the roughly 20 models run for the last assessment of the Intergovernmental Panel on Climate Change (IPCC) disagree with each other profoundly: Double the carbon dioxide in the atmosphere and one model says we’ll see a 1.5 degree Celsius bump; another says it will be 4.5 degrees C.
“It’s super annoying,” Pritchard says.
That factor of three is huge — it could make all the difference to people living on flooding coastlines or trying to grow crops in semi-arid lands.

Pritchard and a small group of other climate modelers are now trying to address the problem by improving models with artificial intelligence.
(Pritchard and his colleagues affectionately call their AI system the “Cloud Brain.”) Not only is AI smart; it’s efficient.
And that, for climate modelers, might make all the difference.

Computer hardware has gotten exponentially faster and smarter — today’s supercomputers handle about a billion billion operations per second, compared to a thousand billion in the 1990s.
Meanwhile a parallel revolution is going on in computer coding.
For decades, computer scientists and sci-fi writers have been dreaming about artificial intelligence: computer programs that can learn and behave like real people.
Starting around 2010, computer scientists took a huge leap forward with a technique called machine learning, specifically “deep learning,” which mimics the complex network of neurons in the human brain.

Traditional computer programming is great for tasks that follow rules: if x, then y.
But it struggles with more intuitive tasks for which we don’t really have a rule book, like translating languages, understanding the nuances of speech, or describing what’s in an image.
This is where machine learning excels.
The idea is old, but two recent developments finally made it practical — faster computers, and a vast amount of data for machines to learn from.
The internet is now flooded with pre-translated text and user-labelled photographs that are perfect for training a machine-learning program.

Companies like Microsoft and Google jumped on deep learning starting in the early 2010s, and have used it in recent years to power everything from voice recognition on smart phones to image searches on the internet.
Scientists have started to pick up these techniques too.
Medical researchers have used it to find patterns in datasets of proteins and molecules to guess which ones might make good drug candidates, for example.
And now deep learning is starting to stretch into climate science and environmental projects.

Researchers hope incorporating artificial intelligence into climate models will further understanding of how clouds, shown here over Bangladesh, will act in a warmer world.
Typical global climate models have pixel sizes far too large to see individual clouds or storm fronts.
The ‘Cloud Brain’ tends to get confused when given scenarios outside its training, such as a much warmer world.
NASA/International Space Station

Microsoft’s AI for Earth project, for example, is throwing serious money at dozens of ventures that do everything from making homes “smarter” in their use of energy for heating and cooling, to making better maps for precision conservation efforts.
A team at the National Energy Research Scientific Computing Center in Berkeley is using deep learning to analyze the vast reams of simulated climate data being produced by climate models, drawing lines around features like cyclones the way a human weather forecaster might do.
Claire Monteleoni at the University of Colorado, Boulder, is using AI to help decide which climate models are better than others at certain tasks, so their results can be weighed more heavily.

But what Pritchard and a handful of others are doing is more fundamental: inserting machine learning code right into the heart of climate models themselves, so they can capture tiny details in a way that is hundreds of times more efficient than traditional computer programming.
For now they’re focused on clouds — hence the name “Cloud Brain” — though the technique can be used on other small-scale phenomena.
That means it might be possible to tighten up the uncertainties of how the climate will respond to rising carbon dioxide, giving us a clearer picture of how clouds might shift and how temperatures and rainfall might vary — and how lives are likely to be affected from one small place to the next.

So far these attempts to hammer deep learning code into climate models are in the early stages, and it’s unclear if they’ll revolutionize model-making or fall flat.

The problem that the Cloud Brain tackles is a mismatch between what climate scientists understand and what computers can model — particularly with regard to clouds, which play a huge role in determining temperature.

While some aspects of cloud behavior are still hard to capture with algorithms, researchers generally know the physics of how water evaporates, condenses, forms droplets, and rains out.
They’ve written down the equations that describe all that, and can run small-scale, short-term models that show clouds evolving over short time periods with grid boxes just a few miles wide.
Such models can be used to see if clouds will grow wispier, letting in more sunlight, or cool the ground by shielding the sun.
But try to stick that much detail into a global-scale, long-term climate model, and it will go about a million times slower.
The general rule of thumb, says Chris Bretherton at the University of Washington, is if you want to cut your grid box dimensions in half, the computation will take 10 times as long.
“It’s not easy to make a model much more detailed,” he says.

The supercomputers that crunch these models cost somewhere in the realm of $100 million to build, says David Randall, a Colorado State University climate modeler; a month's worth of time on such a machine could cost millions.
Those fees don’t actually show up in an invoice for any given researcher; they’re paid by institutions, governments, and grants.
But the financial investment means there’s real competition for computer time.
For this reason, typical global climate models like the ones used thus far in IPCC reports have pixel sizes tens of miles wide — far too large to see individual clouds or even storm fronts.

The trick that Pritchard and others are attempting is to train deep learning systems with data from short-term runs of fine-scale cloud models.
This lets the AI basically develop an intuitive sense for how clouds work.
That AI can then be jimmied into a bigger-pixel global climate model, to shove more realistic cloud behavior into something that’s cheap and fast enough to run.

Pritchard and his two colleagues trained their Cloud Brain on high-resolution cloud model results, and then tested it to see if it would produce the same simulated climates as the slower, high-resolution model.
It did, even getting details like extreme rainfalls right, while running about 20 times faster.
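
The pattern behind the Cloud Brain, training a neural network on output from an expensive fine-scale model and then calling the cheap emulator inside the coarse model, can be sketched in a few lines. The example below uses scikit-learn and synthetic data standing in for high-resolution cloud-model output; it is a toy illustration of the approach, not the actual Cloud Brain architecture or training set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for high-resolution model output: coarse-grid column state
# (temperature, humidity, vertical velocity, ...) mapped to sub-grid cloud tendencies.
n_samples, n_inputs, n_outputs = 20_000, 8, 2
X = rng.normal(size=(n_samples, n_inputs))
true_map = rng.normal(size=(n_inputs, n_outputs))
y = np.tanh(X @ true_map) + 0.05 * rng.normal(size=(n_samples, n_outputs))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A small fully connected network plays the role of the learned cloud parameterization.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                        max_iter=300, random_state=0)
emulator.fit(X_train, y_train)
print("Emulator R^2 on held-out columns:", round(emulator.score(X_test, y_test), 3))

# Inside a coarse global model, each timestep would then call something like
#   cloud_tendencies = emulator.predict(column_state)
# instead of running the expensive cloud-resolving physics.
```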

Others — including Bretherton, a former colleague of Pritchard's, and Paul O'Gorman, a climate researcher at MIT — are doing similar work.
The details of the strategies vary, but the general idea — using machine learning to create a more-efficient programming hack to emulate clouds on a small scale — is the same.
The approach could likewise be used to help large global models incorporate other fine features, like miles-wide eddies in the ocean that bedevil ocean current models, and the features of mountain ranges that create rain shadows.

The scientists face some major hurdles.
The fact that machine learning works almost intuitively, rather than following a rulebook, makes these programs computationally efficient.
But it also means that mankind’s hard-won understanding about the physics of gravitational forces, temperature gradients, and everything else, gets set aside.
That’s philosophically hard to swallow for many scientists, and also means that the resulting model might not be very flexible: Train an AI system on oceanic climates and stick it over the Himalayas and it might give nonsense results.
O’Gorman’s results hint that his AI can adapt to cooler climates but not warmer ones.
And Cloud Brain tends to get confused when given scenarios outside its training, such as a much warmer world.
“The model just blows up,” says Pritchard.
“It’s a little delicate right now.” Another disconcerting issue with deep learning is that it’s not transparent about why it’s doing what it’s doing, or why it comes to the results that it does.
“Basically it’s a black box; you push a bunch of numbers in one end and a bunch of numbers come out the other end,” says Philip Rasch, chief climate scientist at the Pacific Northwest National Laboratory.
“You don’t know why it’s producing the answers it’s producing.”

“In the end, we want to predict something that no one has observed,” says Caltech’s Tapio Schneider.
“This is hard for deep learning.”
For all these reasons, Schneider and his team are taking a different approach.
He is sticking to physics-based models, and using a simpler variant of machine learning to help tune the models.
He also plans to use real data about temperature, precipitation, and more as a training dataset.
“That’s more limited information than model data,” he says.
“But hopefully we get something that’s more predictive of reality when the climate changes.” Schneider’s well-funded effort, called the Climate Machine, was announced this summer but hasn’t yet been built.
No one yet knows how the strategy will pan out.

 Using a combination of cloud data, such as this satellite observation of a tropical storm over South America, and "machine learning" could help to fine-tune climate models. 
The ‘Cloud Brain’ tends to get confused when given scenarios outside its training, such as a much warmer world. 
NASA/Goddart Space Flight Center/Scientific Visualization Studio

The utility of these models for predicting the future climate is the biggest uncertainty.
“That’s the elephant in the room,” says Pritchard, who remains optimistic that he can do it, but accepts that we’ll simply have to wait and see.
Randall, who is watching the developments with interest from the sidelines, is also hopeful.
“We’re not there yet,” he says, “but I believe it will be very useful.”

Climate scientist Drew Shindell of Duke University, who isn't working with machine learning himself, agrees.
“The difficulty with all of these things is we don’t know that the physics that’s important to short-term climate are the same processes important to long-term climate change,” he says.
Train an AI system on short-term data, in other words, and it might not get the long-term forecast right.
“Nevertheless,” he adds, “it’s a good effort, and a good thing to do.
It’s almost certain it will allow us to improve coarse-grid models.”

In all these efforts, deep learning might be a solution for areas of the climate picture for which we don’t understand the physics.
No one has yet devised equations for how microbes in the ocean feed into the carbon cycle and in turn impact climate change, notes Pritchard.
So, since there isn’t a rulebook, AI could be the most promising way forward.
“If you humbly admit it’s beyond the scope of our physics, then deep learning becomes really attractive,” Pritchard says.

Bretherton makes the bullish prediction that in about three years a major climate-modeling center will incorporate machine learning.
If his forecast prevails, global-scale models will be capable of paying better attention to fine details — including the clouds overhead.
And that would mean a far clearer picture of our future climate.

Links :

Monday, December 10, 2018

How ordinary ship traffic could help map the uncharted Arctic Ocean seafloor

A cargo ship sails through multi-year ice in Canada's Northwest Passage.
(Timothy Keane / Fednav)

From Arctic Today by Melody Schreiber

Equipping every ship that enters the Arctic with sensors could help fill critical gaps in maritime charts.

Throughout the world, the ocean floor's details remain largely a mystery; less than 10 percent has been mapped using modern sonar technology.
Even in the United States, which has some of the best maritime maps in the world, only one-third of the ocean and coastal waters have been mapped to modern standards.

This map shows unique ship visits to Arctic waters
between September 1, 2009, and December 31, 2016.

But perhaps the starkest gaps in knowledge are in the Arctic.
Only 4.7 percent of the Arctic has been mapped to modern standards.

“Especially when you get up north, the percentage of charts that are basically based on Royal Navy surveys from the 19th century is terrifying — or should be terrifying,” said David Titley, a retired U.S. Navy Rear Admiral who directs the Center for Solutions to Weather and Climate Risk at the Pennsylvania State University.
Titley spoke alongside several other maritime experts at a recent Woodrow Wilson Center event on marine policy, highlighting the need for improved oceanic maps.

 GeoGarage nautical raster chart coverage with material from international Hydrographic Offices
red : US NOAA / grey : Canada CHS /  black : Denmark Greenland DGA / yellow ; Norway NHS

 GeoGarage nautical raster chart coverage (NGA material)

 Catalogue of charts from
Department of Navigation and Oceanography of the Russian Federation

When he was on active duty in the Navy, Titley said, “we were finding sea mounts that we had no idea were there.
And conversely, we were getting rid of sea mounts on charts that weren’t there.”
The problem, he said, comes down to accumulating — and managing — data. But there could be an intriguing solution: crowdsourcing.
“How does every ship become a sensor?” Titley asks.
Ships outfitted with sensors could provide the very information they need to travel more effectively.

Each ship would collect information on oceans, atmosphere, ecosystems, pollutants and more.
As the ships traverse the ocean, they would help improve existing maps and information about the waters they tread.
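
One concrete version of "every ship becomes a sensor" is crowdsourced bathymetry: logging the echo sounder's depth against the GPS position and merging many ships' tracks into a grid. The sketch below shows that aggregation step with made-up soundings, and it ignores the tide, draft and sound-speed corrections a real charting pipeline would apply.

```python
import numpy as np

def grid_soundings(lats, lons, depths, cell_deg=0.01):
    """Average crowdsourced depth soundings into a regular lat/lon grid.

    Returns dicts mapping (lat_index, lon_index) to mean depth and sounding count,
    so sparsely and densely surveyed cells can be told apart.
    """
    sums, counts = {}, {}
    for lat, lon, depth in zip(lats, lons, depths):
        key = (int(np.floor(lat / cell_deg)), int(np.floor(lon / cell_deg)))
        sums[key] = sums.get(key, 0.0) + depth
        counts[key] = counts.get(key, 0) + 1
    means = {key: sums[key] / counts[key] for key in sums}
    return means, counts

# Made-up track: a ship crossing a shoaling area, logging depth once a minute.
rng = np.random.default_rng(1)
lats = 74.0 + np.linspace(0, 0.05, 300)
lons = -95.0 + np.linspace(0, 0.08, 300)
depths = 120 - 600 * (lats - 74.0) + rng.normal(0, 1.5, 300)   # metres, with noise

means, counts = grid_soundings(lats, lons, depths)
shallowest = min(means, key=means.get)
print(f"{len(means)} grid cells populated; shallowest cell {shallowest} "
      f"averages {means[shallowest]:.1f} m from {counts[shallowest]} soundings")
```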


Maps are becoming more important as shipping activity increases — both around the world and in the Arctic.

In August, the Russian research ship Akademik Ioffe ran aground in Canada’s Arctic. In 2015, the Finnish icebreaker Fennica ripped a three-foot gash in its hull — while sailing within the relatively better charted waters of Alaska’s Dutch Harbor.

“The traditional way that we have supplied these ships with information — with nautical charts and predicted tides and tide tables, and weather over radio facts — are not anywhere near close to being what’s necessary,” said Rear Admiral Shep Smith, director of NOAA’s Office of Coast Survey.
The “next generation of services” would go much further, predicting the water level, salinity, and other information with more precision and detail.
One of NOAA’s top priorities, Smith said, is “the broad baseline mapping of the ocean — including the hydrography, the depth and form of the sea floor, and oceanography.”
Such maps are necessary to support development, including transportation, offshore energy, fishing and stewardship of natural resources, he said.

 A team of engineers and students from the University of New Hampshire’s Center for Coastal and Ocean Mapping recently returned from a voyage that deployed the first autonomous (robotic) surface vessel — the Bathymetric Explorer and Navigator (BEN) — from a NOAA ship far above the Arctic Circle. Credit: Courtesy Christina Belton, NOAA

In NOAA’s records of U.S. waters and coasts, they have at least one piece of information on only 41 percent of the ocean.
“The other 59 percent, there’s potentially a gold mine of economically important information in there,” he continued. “Or environmentally important information.”
NOAA struggles even to model how water moves in the ocean without more information, he said.

They are turning to crowdsourcing, satellite-derived bathymetry — and the idea of turning every ship into a sensor.
Projects like Seabed 2030 — a worldwide effort to map the seabed — will be crucial to these efforts, Smith said.
“It’s hard to map the bottom of the ocean,” said Rear Admiral Jon White, president and CEO of the Consortium for Ocean Leadership.
“It’s like trying to map your backyard with ants, with the ships that we have.”

However, he said, the technology to do so is improving.
“There are great opportunities for the people who understand this technology, to make new ways, better ways to actually map it faster,” White said.
Moving forward, he said, both federal investment and public-private partnerships should focus on “getting every ship to be a sensor in the ocean.”
That effort will be crucial for accomplishing “all the things that we’re trying to do in the maritime environment,” he said.

Links: