Categories
South Caucasus News

Boeing says wiring flaws may delay first quarter 737 MAX deliveries


“Our 737 program is performing rework on a group of airplanes to fix wires that have small scratches due to a machining error,” Boeing said, adding that production of its new MAX jets continues at the existing rate of 42 jets a month, News.Az reports, citing CNN.
***
Boeing shares were off nearly 1% in afternoon trading on Tuesday.
The company said it plans to increase the rate to 47 jets a month later this year and is opening a fourth 737 assembly line at its Everett, Washington plant this summer. It aims to reach 63 737 jets a month within the next few years.
Boeing did not specify if the scratches on the wires were caused by a supplier or the company. The company said it has informed the Federal Aviation Administration and customers. The FAA could not immediately comment.
The planemaker said all in-service 737 MAX airplanes can continue to operate safely and that it did not expect the issue to affect the company’s goal to deliver about 500 737 jets this year.
The announcement comes after the company said on Tuesday it had delivered 51 jets in February – its highest February total since 2018 and up from 46 in January. Deliveries in February included 43 737 MAX jets.

The post Boeing says wiring flaws may delay first quarter 737 MAX deliveries appeared first on azeritimes.com.


Categories
South Caucasus News

Starship delays threaten NASA’s moon landing timeline, watchdog warns


NASA last month added an extra Artemis test mission and acknowledged the technical challenges its contractors face within the Artemis moon program, under which Elon Musk’s SpaceX is to land humans on the moon across two missions beginning in 2028, followed by similar crewed landings by Jeff Bezos’ Blue Origin, News.Az reports, citing BBC.
***
The agency kept 2028 as its target moon landing date for Starship. But the magnitude of both SpaceX and Blue Origin’s remaining development work on their moon landers jeopardizes the 2028 target.

The post Starship delays threaten NASA’s moon landing timeline, watchdog warns appeared first on azeritimes.com.


Categories
South Caucasus News

ChatGPT as a therapist? New study reveals serious ethical risks


The study found that even when instructed to use established psychotherapy approaches, the systems consistently fail to meet professional ethics standards set by organizations such as the American Psychological Association, News.Az reports, citing foreign media.

***

Researchers from Brown University, working closely with mental health professionals, identified repeated patterns of problematic behavior. In testing, chatbots mishandled crisis situations, gave responses that reinforced harmful beliefs about users or others, and used language that created the appearance of empathy without genuine understanding.

“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”

The findings were presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society. The research team is affiliated with Brown’s Center for Technological Responsibility, Reimagination and Redesign.

How Prompts Shape AI Therapy Responses

Zainab Iftikhar, a Ph.D. candidate in computer science at Brown who led the study, set out to examine whether carefully worded prompts could guide AI systems to behave more ethically in mental health settings. Prompts are written instructions designed to steer a model’s output without retraining it or adding new data.

“Prompts are instructions that are given to the model to guide its behavior for achieving a specific task,” Iftikhar said. “You don’t change the underlying model or provide new data, but the prompt helps guide the model’s output based on its pre-existing knowledge and learned patterns.

“For example, a user might prompt the model with: ‘Act as a cognitive behavioral therapist to help me reframe my thoughts,’ or ‘Use principles of dialectical behavior therapy to assist me in understanding and managing my emotions.’ While these models do not actually perform these therapeutic techniques like a human would, they rather use their learned patterns to generate responses that align with the concepts of CBT or DBT based on the input prompt provided.”

People regularly share these prompt strategies on platforms like TikTok, Instagram, and Reddit. Beyond individual experimentation, many consumer-facing mental health chatbots are built by applying therapy-related prompts to general-purpose LLMs. That makes it especially important to understand whether prompting alone can make AI counseling safer.
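Iftikhar’s description of prompting can be sketched in a few lines. The snippet below is illustrative only (the function name, prompt wording, and message structure are assumptions, not taken from the study): it shows how a therapist “persona” is created purely by prepending a system prompt to the conversation sent to a general-purpose chat model, leaving the model itself unchanged.

```python
# Illustrative sketch (not the study's actual setup): the "CBT therapist"
# exists only as a system prompt prepended to the conversation.
# The underlying model and its weights are untouched.

def build_cbt_session(user_message: str) -> list[dict]:
    """Assemble the chat messages a wrapper app would send to an LLM API."""
    system_prompt = (
        "Act as a cognitive behavioral therapist. Help the user identify "
        "and reframe unhelpful thoughts using CBT techniques."
    )
    return [
        # The entire "therapist" persona lives in this one instruction.
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = build_cbt_session("I failed one exam, so I must be a failure.")
```

As the quote above notes, nothing in this arrangement makes the model actually apply CBT; it only biases the output toward CBT-sounding language, which is why the study’s prompted “counselors” could still violate ethical standards.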

Testing AI Chatbots in Simulated Counseling

To evaluate the systems, the researchers observed seven trained peer counselors with experience in cognitive behavioral therapy. These counselors conducted self-counseling sessions with AI models prompted to act as CBT therapists. The models tested included versions of OpenAI’s GPT series, Anthropic’s Claude, and Meta’s Llama.

The team then selected simulated chats based on real human counseling conversations. Three licensed clinical psychologists reviewed those transcripts to flag possible ethical violations.

The analysis uncovered 15 distinct risks grouped into five broad categories:

  • Lack of contextual adaptation: Overlooking a person’s unique background and offering generic advice.
  • Poor therapeutic collaboration: Steering the conversation too forcefully and at times reinforcing incorrect or harmful beliefs.
  • Deceptive empathy: Using phrases such as “I see you” or “I understand” to suggest emotional connection without true comprehension.
  • Unfair discrimination: Displaying bias related to gender, culture, or religion.
  • Lack of safety and crisis management: Refusing to address sensitive issues, failing to direct users to appropriate help, or responding inadequately to crises, including suicidal thoughts.

The Accountability Gap in AI Mental Health

Iftikhar noted that human therapists can also make mistakes. The key difference is oversight.

“For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice,” Iftikhar said. “But when LLM counselors make these violations, there are no established regulatory frameworks.”

The researchers emphasize that their findings do not suggest AI has no place in mental health care. Tools powered by artificial intelligence could help expand access, particularly for people who face high costs or limited availability of licensed professionals. However, the study highlights the need for clear safeguards, responsible deployment, and stronger regulatory structures before relying on these systems in high-stakes situations.
For now, Iftikhar hopes the work encourages caution.

“If you’re talking to a chatbot about mental health, these are some things that people should be looking out for,” she said.

Why Rigorous Evaluation Matters

Ellie Pavlick, a Brown computer science professor who was not involved in the research, said the study underscores the importance of carefully examining AI systems used in sensitive areas like mental health. Pavlick leads ARIA, a National Science Foundation AI research institute at Brown focused on building trustworthy AI assistants.

“The reality of AI today is that it’s far easier to build and deploy systems than to evaluate and understand them,” Pavlick said. “This paper required a team of clinical experts and a study that lasted for more than a year in order to demonstrate these risks. Most work in AI today is evaluated using automatic metrics which, by design, are static and lack a human in the loop.”

She added that the study could serve as a model for future research aimed at improving safety in AI mental health tools.

“There is a real opportunity for AI to play a role in combating the mental health crisis that our society is facing, but it’s of the utmost importance that we take the time to really critique and evaluate our systems every step of the way to avoid doing more harm than good,” Pavlick said. “This work offers a good example of what that can look like.”

The post ChatGPT as a therapist? New study reveals serious ethical risks appeared first on azeritimes.com.


Categories
South Caucasus News

Amazon launches healthcare AI assistant on its website, app


The artificial intelligence assistant can explain results, connect patients with providers and answer questions about medications and symptoms, the company said in a release. The model, announced in January, was previously exclusive to members of One Medical, the company’s clinical services provider, News.Az reports, citing CNBC.

***

Customers do not need to be members of Prime, the company’s premium subscription service, or One Medical to use the free assistant.

“Health AI is designed to handle the logistical and informational work that creates friction in healthcare, so patients and providers can spend more time on what matters most,” said Andrew Diamond, chief medical officer at Amazon One Medical.

For non-emergency conditions ranging from acne and head lice to diabetes and sleep apnea, the agent can help manage symptoms, conduct virtual assessments, and provide treatment advice.

A spokesperson for the company said the assistant does not create treatment plans, and when patients require treatments or have complex conditions, they will be connected with a provider. Visits that require a provider cost $29 each for patients who are not One Medical members or not using an introductory offer through Prime.

Customers can give the agent permission to access medical data, including lab results, records and clinical notes. The assistant can also analyze healthcare purchases made on the website, such as vitamins or blood pressure monitors, in order to ask follow-up questions.

The post Amazon launches healthcare AI assistant on its website, app appeared first on azeritimes.com.


Categories
South Caucasus News

Attack hits Russian consulate in Iran’s Isfahan


Foreign Ministry spokeswoman Maria Zakharova said in a commentary that the diplomatic mission was damaged “in an attack on the nearby provincial governor’s office” on Sunday, News.Az reports, citing foreign media.

***

“Windows in the office building and residential apartments were shattered, and several employees” felt the impact of the blast wave, but there were no casualties, Zakharova said.

Russia, she added, considers attacks on diplomatic and consular missions to be a “flagrant violation” of international law.

“We demand that all parties strictly respect the inviolability of diplomatic facilities and refrain from attacks on the safety, life, and health of their personnel,” she added, calling on parties to the conflict to “immediately end the military confrontation and return to the negotiating table.”

Regional escalation has flared since Israel and the US launched a joint attack against Iran on Feb. 28. Over 1,200 people have since been killed and thousands of others injured.

Tehran has refused to surrender and retaliated with drone and missile strikes targeting Israel, Jordan, Iraq and Gulf countries that are home to US military assets.

The post Attack hits Russian consulate in Iran’s Isfahan appeared first on azeritimes.com.


Categories
South Caucasus News

Iran strikes German military forces



Categories
South Caucasus News

Armenian Genocide Memorial’s Director ‘Forced’ to Resign After Vance Visit – Asbarez



Categories
South Caucasus News

Raiders reunite trio of stars from Georgia National Championship team – Raiders Wire



Categories
South Caucasus News

There is no easy exit to Trump’s war



The post There is no easy exit to Trump’s war appeared first on azeritimes.com.


Categories
South Caucasus News

American soldier killed in the war with Iran