Contact Me By Email

Saturday, February 15, 2025


The Canon R7 II Has The Potential To Be The Best APS-C Camera

The Canon R7 II has the potential to be a significant upgrade over its predecessor, featuring a stacked sensor for improved readout speed and low-light performance. Refinements in autofocus, build quality, and connectivity are also expected, including better weather resistance, customizable button functions, and content authenticity features. Overall, the Canon R7 II aims to be an affordable APS-C camera that meets the evolving needs of photographers.

The Canon R7.

Since the beginning of the megapixel war, APS-C cameras have been an afterthought for manufacturers. The format has never truly been respected, and an APS-C body often serves as a cut-down replica of an already existing high-end camera. It can feel as if these cameras are deliberately held back so that people are forced to buy professional models. However, there are a few exceptions that restore our faith in manufacturers, and the amazing Canon R7, launched in 2022, is one of them. Three years on, while its sibling, the R6, has seen multiple successors, we believe a Canon R7 II has been a long time coming.

Given the high expectations for the Canon R7 II, here is a look at what the camera could do to build on the standards set by its predecessor.

Sensor and Image Quality

We reviewed the Canon R7, and we absolutely adored it. For its time, it had everything one could need for bird, wildlife, and sports photography. Its 32.5 MP APS-C CMOS sensor performs well in many scenarios. However, we believe it is high time the Canon R7 II launched with a stacked sensor, which would improve readout speed and reduce rolling shutter; the effect was not pronounced in our testing, but it was visible. Faster readout would also enable higher frame rates and better low-light performance.

In addition, a higher-resolution sensor with improved dynamic range and better noise handling at high ISO settings would make the camera even more compelling.

Canon R7’s back panel.

Autofocus and Lenses

The Canon R7 features an impressive autofocus system similar to the EOS R3’s, and in many ways it is on par with the EOS R6 and the R8. What we did notice, however, was that the available lenses were holding the camera back. We hope the company continues to create lenses that do the camera justice, especially in challenging lighting conditions. At the same time, we would appreciate refinements to subject tracking and eye-detection AF. Since Sigma is now producing APS-C lenses and Tamron will follow suit, the wider range may help photographers get better results.

Build and Ergonomics

The R7 offers solid weather-sealing and a strong body that can withstand challenging environments, but further refinement, such as improved weather resistance, would make it even more reliable. The previous camera also had its quirks: the IBIS did not automatically detect when the camera was on a tripod, and the ISO button was awkwardly placed. Even minor issues like these can hamper a photographer’s workflow, so a slightly better design and button placement could do wonders.

Connectivity And Functions

When Canon launched its 1.3.0 firmware, it left the R7 behind. As we reported in our review: “At this point, I think that Canon should let us wire whatever setting we want to whatever button we wish. Sony, Leica, Fujifilm, and other brands all do this. Many times, it might make me not want to pick up a Canon camera instead of another brand’s.” 

This is why the Canon R7 II must address this shortcoming and ensure that future updates let people remap their cameras’ controls to suit their needs. Furthermore, Canon should bring content authenticity features to its APS-C cameras, as they are absolutely needed in today’s AI-driven age.

Overall, the Canon R7 II has the potential to be a significant upgrade over its predecessor, giving users an APS-C camera that is not just affordable but also leaves no stone unturned in meeting the evolving needs of photographers.

At the same time, we should remember that it’s an APS-C camera. And often, brands don’t really give those cameras any unique features.

Thursday, January 30, 2025


With Sweeping Executive Orders, Trump Tests Local Control of Schools


President Trump issued executive orders aimed at reshaping public education, promoting “patriotic” education and restricting discussions about race and gender. While the orders echo conservative state laws, their impact is uncertain due to the decentralized nature of K-12 education funding and curriculum control. While some self-censorship may occur, historical precedents suggest that attempts to ban ideas from the classroom are rarely successful.

The orders seek to encourage “patriotic education” and restrict discussions about racism and gender by threatening to withdraw federal funding. But schools are often resistant to change.

A student walks down a school hallway.
President Trump’s orders seek to exert more federal control over schools. Philip Cheung for The New York Times

With a series of executive orders, President Trump has demonstrated that he has the appetite for an audacious fight to remake public education in the image of his “anti-woke,” populist political movement.

But in a country unique among nations for its hyperlocal control of schools, the effort is likely to run into legal, logistical and funding trouble as it tests the limits of federal power over K-12 education.

On Wednesday evening, Mr. Trump signed two executive orders. One was a 2,400-word behemoth focused mainly on race, gender and American history. It seeks to prevent schools from recognizing transgender identities or teaching about concepts such as structural racism, “white privilege” and “unconscious bias,” by threatening their federal funding.

The order also promotes “patriotic” education that depicts the American founding as “unifying, inspiring and ennobling” while explaining how the United States “has admirably grown closer to its noble principles throughout its history.”

The second order directs a swath of federal agencies to look for ways to expand access to private school vouchers.

Both orders echo energetic conservative lawmaking in the states. Over the past five years, the number of children using taxpayer dollars for private education or home-schooling costs has doubled, to one million. More than 20 states have restricted how race, gender and American history can be discussed in schools. States and school boards have banned thousands of books.

It is not clear what real-world effect the new federal orders might have in places where shifts are not already underway. States and localities provide 90 percent of the funding for public education — and have the sole power to set curriculums, tests, teaching methods and school-choice policies.

The orders are likely to strain against the limits of the federal government’s role in K-12 education, a role that Mr. Trump has said should be reduced.

That paradox is a “confounding” one, said Derrell Bradford, president of 50CAN, a nonpartisan group that supports private school choice. He applauded the executive order on vouchers and said that taken together, the two orders mark a major moment in the centuries-old debate over what values the nation’s schools should impart.

“You can like it or not, but we’re not going to have values-neutral schools,” he said.

Still, there are many legal questions about the administration’s ability to restrict federal funding in order to pressure schools.

The major funding stream that supports public schools, known as Title I, goes out to states in a formula set by Congress, and the president has little power to restrict its flow.

“It seems like a significant part of the strategy is to set priorities through executive order and make the Congress or the Supreme Court respond — as they are supposed to in a system of checks and balances,” Mr. Bradford said.

The executive branch does control smaller tranches of discretionary funding, but they may not be enough to persuade school districts to change their practices.

In Los Angeles, Alberto Carvalho, superintendent of the nation’s second-largest school district, said last fall that regardless of who won the presidential election, his system would not change the way it handles gender identity.

Transgender students are allowed to play on sports teams and use bathrooms that align with their gender identities, policies the Trump order is trying to end.

On Wednesday, after it became clear that Mr. Trump would attempt to cut funding, a spokeswoman for the Los Angeles public school district released a more guarded statement, saying, “Our academic standards are aligned with all state and federal mandates and we remain committed to creating and maintaining a safe and inclusive learning environment for all students.”

One big limit to Mr. Trump’s agenda is that despite official federal, state and district policies, individual teachers have significant say over what gets taught and how.

Even in conservative regions of Republican-run states, efforts to control the curriculum have sometimes sputtered.

In Oklahoma, for example, where the state superintendent, Ryan Walters, is a Trump ally, some conservative educators have pushed back against efforts to insert the Bible into the curriculum.

Nationally, surveys of teachers show that the majority did not change their classroom materials or methods in response to conservative laws. Some educators have reported that they are able to subtly resist attempts to control how subjects like racism are talked about, for example, by teaching students about the debate for and against restrictive curriculum policies.

Florida has been, in many ways, an outlying case — and one that has served as a model for the Trump administration.

There, Gov. Ron DeSantis created powerful incentives for teachers to embrace priorities such as emphasizing the Christian beliefs of the founding fathers and restricting discussions of gender and racism.

Teachers could earn a $3,000 bonus for taking a training course on new civics learning standards. If their students performed poorly on a standardized test of the subject, their own evaluation ratings suffered.

On race and gender, the DeSantis restrictions were broad and vaguely written. Schools accused of breaking the laws could be sued for financial damages, and teachers were threatened with losing their professional licenses.

This led many schools and educators to interpret the laws broadly. Sometimes they interpreted them more broadly than intended, the DeSantis administration claimed. A ban on books with sexual content led one district to announce that “Romeo and Juliet” would be pulled from the curriculum.

A ban on recognizing transgender identities led to schools sending home nickname permission slips to parents, which were required even if a student named William wanted to be called Will.

Public school educators are often fearful of running into trouble with higher-level authorities. It is possible, and even likely, that Mr. Trump’s executive orders will lead to some measure of self-censorship.

Adam Laats, an education historian at Binghamton University, said one potential historical antecedent for Mr. Trump’s executive order was the Red Scare in the mid-20th century, during which many teachers accused of Communist sympathies lost their jobs or were taken to court.

“To my mind, this executive order is a blast of steam,” he said, “dangerous especially because it can encourage local aggressive activism.”

But, he noted, political attempts to ban ideas from the classroom have rarely been successful.

Wednesday, January 29, 2025

iPad Pro 2024 Review: Why Not Make This a Mac?



Apple's highest-end iPad still feels ahead of where the Mac is in some ways and behind it in others. It makes me wonder, more than ever, why there's a line between iPads and Macs at all.

Saturday, January 25, 2025

When A.I. Passes This Test, Look Out


A new test called “Humanity’s Last Exam” is being released, designed to measure the capabilities of AI systems across a wide range of academic subjects. The test, created by AI safety researcher Dan Hendrycks, aims to determine if AI systems can surpass human experts in answering complex questions. While current AI models performed poorly on the exam, Hendrycks predicts their scores will rise rapidly, potentially surpassing 50% by the end of the year.

The creators of a new test called “Humanity’s Last Exam” argue we may soon lose the ability to create tests hard enough for A.I. models.

Rune Fisker

If you’re looking for a new reason to be nervous about artificial intelligence, try this: Some of the smartest humans in the world are struggling to create tests that A.I. systems can’t pass.

For years, A.I. systems were measured by giving new models a variety of standardized benchmark tests. Many of these tests consisted of challenging, S.A.T.-caliber problems in areas like math, science and logic. Comparing the models’ scores over time served as a rough measure of A.I. progress.

But A.I. systems eventually got too good at those tests, so new, harder tests were created — often with the types of questions graduate students might encounter on their exams.

Those tests aren’t in good shape, either. New models from companies like OpenAI, Google and Anthropic have been getting high scores on many Ph.D.-level challenges, limiting those tests’ usefulness and leading to a chilling question: Are A.I. systems getting too smart for us to measure?

This week, researchers at the Center for AI Safety and Scale AI are releasing a possible answer to that question: a new evaluation, called “Humanity’s Last Exam,” that they claim is the hardest test ever administered to A.I. systems.

Humanity’s Last Exam is the brainchild of Dan Hendrycks, a well-known A.I. safety researcher and director of the Center for AI Safety. (The test’s original name, “Humanity’s Last Stand,” was discarded for being overly dramatic.)

Mr. Hendrycks worked with Scale AI, an A.I. company where he is an advisor, to compile the test, which consists of roughly 3,000 multiple-choice and short answer questions designed to test A.I. systems’ abilities in areas ranging from analytic philosophy to rocket engineering.

Questions were submitted by experts in these fields, including college professors and prizewinning mathematicians, who were asked to come up with extremely difficult questions they knew the answers to. 

Here, try your hand at a question about hummingbird anatomy from the test:

Hummingbirds within Apodiformes uniquely have a bilaterally paired oval bone, a sesamoid embedded in the caudolateral portion of the expanded, cruciate aponeurosis of insertion of m. depressor caudae. How many paired tendons are supported by this sesamoid bone? Answer with a number.

Or, if physics is more your speed, try this one: 

A block is placed on a horizontal rail, along which it can slide frictionlessly. It is attached to the end of a rigid, massless rod of length R. A mass is attached at the other end. Both objects have weight W. The system is initially stationary, with the mass directly above the block. The mass is given an infinitesimal push, parallel to the rail. Assume the system is designed so that the rod can rotate through a full 360 degrees without interruption. When the rod is horizontal, it carries tension T1. When the rod is vertical again, with the mass directly below the block, it carries tension T2. (Both these quantities could be negative, which would indicate that the rod is in compression.) What is the value of (T1−T2)/W?

(I would print the answers here, but that would spoil the test for any A.I. systems being trained on this column. Also, I’m far too dumb to verify the answers myself.)

A seated man in a gray shirt poses for a photo.
Humanity’s Last Exam is the brainchild of Dan Hendrycks, an A.I. safety researcher and director of the Center for AI Safety. Guerin Blask for The New York Times

The questions on Humanity’s Last Exam went through a two-step filtering process. First, submitted questions were given to leading A.I. models to solve.

If the models couldn’t answer them (or if, in the case of multiple-choice questions, the models did worse than by random guessing), the questions were given to a set of human reviewers, who refined them and verified the correct answers. Experts who wrote top-rated questions were paid between $500 and $5,000 per question, as well as receiving credit for contributing to the exam.

Kevin Zhou, a postdoctoral researcher in theoretical particle physics at the University of California, Berkeley, submitted a handful of questions to the test. Three of his questions were chosen, all of which he told me were “along the upper range of what one might see in a graduate exam.” 

Mr. Hendrycks, who helped create a widely used A.I. test known as Massive Multitask Language Understanding, or M.M.L.U., said he was inspired to create harder A.I. tests by a conversation with Elon Musk. (Mr. Hendrycks is also a safety advisor to Mr. Musk’s A.I. company, xAI.) Mr. Musk, he said, raised concerns about the existing tests given to A.I. models, which he thought were too easy.

“Elon looked at the M.M.L.U. questions and said, ‘These are undergrad level. I want things that a world-class expert could do,’” Mr. Hendrycks said.

There are other tests trying to measure advanced A.I. capabilities in certain domains, such as FrontierMath, a test developed by Epoch AI, and ARC-AGI, a test developed by the A.I. researcher François Chollet.

But Humanity’s Last Exam is aimed at determining how good A.I. systems are at answering complex questions across a wide variety of academic subjects, giving us what might be thought of as a general intelligence score.

“We are trying to estimate the extent to which A.I. can automate a lot of really difficult intellectual labor,” Mr. Hendrycks said.

Once the list of questions had been compiled, the researchers gave Humanity’s Last Exam to six leading A.I. models, including Google’s Gemini 1.5 Pro and Anthropic’s Claude 3.5 Sonnet. All of them failed miserably. OpenAI’s o1 system scored the highest of the bunch, with a score of 8.3 percent.

(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)

Mr. Hendrycks said he expected those scores to rise quickly, and potentially to surpass 50 percent by the end of the year. At that point, he said, A.I. systems might be considered “world-class oracles,” capable of answering questions on any topic more accurately than human experts. And we might have to look for other ways to measure A.I.’s impacts, like looking at economic data or judging whether it can make novel discoveries in areas like math and science.

“You can imagine a better version of this where we can give questions that we don’t know the answers to yet, and we’re able to verify if the model is able to help solve it for us,” said Summer Yue, Scale AI’s director of research and an organizer of the exam.

Part of what’s so confusing about A.I. progress these days is how jagged it is. We have A.I. models capable of diagnosing diseases more effectively than human doctors, winning silver medals at the International Math Olympiad and beating top human programmers on competitive coding challenges.

But these same models sometimes struggle with basic tasks, like arithmetic or writing metered poetry. That has given them a reputation as astoundingly brilliant at some things and totally useless at others, and it has created vastly different impressions of how fast A.I. is improving, depending on whether you’re looking at the best or the worst outputs. 

That jaggedness has also made measuring these models hard. I wrote last year that we need better evaluations for A.I. systems. I still believe that. But I also believe that we need more creative methods of tracking A.I. progress that don’t rely on standardized tests, because most of what humans do — and what we fear A.I. will do better than us — can’t be captured on a written exam.

Mr. Zhou, the theoretical particle physics researcher who submitted questions to Humanity’s Last Exam, told me that while A.I. models were often impressive at answering complex questions, he didn’t consider them a threat to him and his colleagues, because their jobs involve much more than spitting out correct answers.

“There’s a big gulf between what it means to take an exam and what it means to be a practicing physicist and researcher,” he said. “Even an A.I. that can answer these questions might not be ready to help in research, which is inherently less structured.”