Suitable for ASUS TRANSFORMER BOOK FLIP TP550LB laptop screen repair. Original factory-sourced parts guarantee quality, and an extremely strict QC process keeps the fault rate under 3 percent. We offer reasonable prices among the screen suppliers in Shenzhen, China, along with fast delivery and good after-sales service. We sincerely look forward to building a long-term business relationship with you.
Screen size: 15.6 inches
Model: ASUS TRANSFORMER BOOK FLIP TP550LB touch screen digitizer replacement
Condition: High-quality grade A touch screen (no bubbles)
Contents: touch panel glass
3. Product features and applications
ASUS TRANSFORMER BOOK FLIP TP550LA touch screen digitizer replacement, brand-new condition.
4. More about us
5. Package and Warranty
Every item is tested before delivery, carefully packed in an individual plastic bubble bag, braced with stiff cardboard, and finally shipped in a carton.
This product is warranted for 3 months from the date the customer receives it.
Damage caused by human factors is not covered by the warranty.
Shipping costs for warranty returns are not covered by the seller.
Refunds are issued after the returned parcel is received. All goods carry identification labels;
do not tear them off, otherwise the warranty is void. Thank you for your understanding.
Q1: How does the RMA process work?
A: We have warehouses in both Hong Kong and America to collect faulty products. Once the fault is confirmed, we will send a new replacement in the next package.
Q2: How could I buy products if there is no price on the website?
A: Buyers need to send the required product details via e-mail; a reasonable quotation will then be provided on request.
Q3: How soon will I get a reply to my enquiry?
A: Buyers will get a reply within 24 hours.
Q4: When will you deliver the goods?
A: Shipment is made within 3-5 days after receipt of payment.
Q5: What should I do when I receive the goods?
A: Please check and test the goods as soon as you receive them, and make sure that all of them function properly.
7. Latest news
AI systems are a threat – but not the way Elon Musk claims
In 2017, Elon Musk claimed that AI is one of the greatest threats to the human race. “AI is the rare case where I think we need to be proactive in regulation instead of reactive,” he told students from Massachusetts Institute of Technology (MIT). “Because I think by the time we are reactive in AI regulation, it’ll be too late. AI is a fundamental risk to the existence of human civilization.”
However, according to Tom Siebel, founder and CEO of C3, there are two much more pressing concerns: privacy and the vulnerability of the Internet of Things.
Siebel is one of the leading names in AI. He began his career as a computer scientist at Oracle and founded his own company, Siebel Systems, in 1993. By 2000 the company had 8,000 employees in 29 countries and $2bn in revenue. The company merged with Oracle in 2006, and Siebel founded C3 in 2009.
C3 has spent about 10 years and half a billion dollars building a platform for an AI suite, and works with clients including the US Air Force, Shell and John Deere to develop industrial-scale applications. Its systems help reduce greenhouse gas emissions, predict hardware failure for offshore oil rigs, fighter jets and tractors, and assist banks with preventing money-laundering.
He says that instead of attempting the impossible task of regulating AI algorithms (a proposal he says was a publicity stunt from Musk), we should be focusing on the far more real threat AI systems pose to our privacy.
AI for social good
“Let's think about using AI for precision medicine, which will be done at massive scale,” he says. “We might aggregate the healthcare records for the population of the UK, or the population of the United States – pharmacology, radiology, health history, blood count history, all of this data. That’s a big data set. And then in the future, we’ll also have the genome sequence for all these people.”
These systems could be used for predicting the onset of disease, and providing the best possible treatment for an individual.
“We can use AI to assist physicians in making diagnoses,” says Siebel, “for example, reading radiographs or CAT scans and advising them. But we're looking at all the data – blood chemistry, whatever – and advising on which diseases [they] should be looking for.
“And then, when one is selected, we’ll again have human-specific or genome-specific treatment protocols. So we’ll be able to predict with very high levels of precision adverse drug reactions, who is predisposed towards addiction (for example, to opiates). And efficacy – what is the optimal pharmaceutical product or combination of pharmaceutical products to treat this disease?
“And so, for example, if we could, for a population the size of the United States or the UK, identify who is predisposed to come down with diabetes in the next five years with high levels of precision, we can treat those people clinically now rather than treat them in the emergency room in five years. And the social and economic implications of that are staggering.”
Your privacy at risk
So far, so positive – but there’s another side of the equation. “Now we know who's going to come down with diabetes, and we know who's going to be diagnosed with a terminal illness as well,” Siebel says. “Do you want to know that? I'm not sure I do – but either the government medical service knows it, or the insurance aggregator knows it, and what are they going to do with those data? We’re seeing that when it comes to personally identifiable data, corporations are not regulating themselves.” Siebel cites Facebook as the most obvious example.
“So how will these data be used? Will they be used for prioritizing who gets treatment? Will they be used for setting insurance rates? Who needs to know?”
As Siebel notes, in the United States people who have a pre-existing condition often find it hard (or very expensive) to secure health insurance – and with AI-supported healthcare, things could be even worse.
“Who cares about pre-existing conditions when we know what you're going to be diagnosed with? So the implications of how people deal with these kinds of data are really very troubling.”
Bringing down the grid
Then there’s the Internet of Things, which Siebel says is extremely vulnerable to attack – with potentially catastrophic consequences.
“I think there are troubling issues associated with how fragile these systems are, like power systems and banking systems,” he says. “If you shut down the power system or the utility system of the UK or the United States, I think something like nine out of ten people in the population die. All supply chains stop.
“Electrical power is the bottom of Maslow's hierarchy for 21st-century civilization. All other systems – whether it's security, food supply, water distribution, defense, financial services – they're all dependent upon it, so if the grid doesn't work, your milk isn't on the shelf in the grocery store. So these are very troubling issues.”
So what's the answer? Siebel says the EU has started to put a dent in these problems with its General Data Protection Regulation, but together with national governments, it needs to go a lot further.
What certainly shouldn’t happen is the creation of a government agency to audit AI algorithms. “Elon is one of the smartest people in the information technology industry in the world," says Siebel, "but with all due respect, a lot of his comments in the last three years do not appear to be that well-grounded.
"The idea that we’re going to have government agencies that are going to regulate AI algorithms is just crazy. When does a computer algorithm become AI? Nobody can draw that line, and if you put some government agency on it, it’s just going to be a big mess. But privacy is something they can protect, and they need to protect.
"That might fly in the face of First Amendment rights, but if they don't act, a lot of people are going to be hurt."