Claude 3.7 Sonnet [Public Preview] #152291
55 comments · 137 replies
-
It's out already: https://www.anthropic.com/news/claude-3-7-sonnet I just tested it on Cursor, it's much better.
-
When will it be available in GitHub Copilot?
-
Cursor, Cody, and Roo Code have announced support for Sonnet 3.7, and Windsurf is expected to support it soon. Hope GitHub Copilot supports it as well.
-
But GitHub Copilot is slow to pick up new models.
-
@matthewisabel I was able to output 6000 tokens and it wasn't even the limit. 🥇 Edit: well, that didn't last long, see below.
-
Does Sonnet 3.7 (and thinking mode) have a usage quota? Or is it the same unlimited usage as 3.5?
-
Are the thinking/reasoning tokens hidden for the thinking model? I can't seem to find them.
-
I can barely use Claude Sonnet 3.7 properly. I keep getting rate limited, and now I'm blocked for 2 hours, even on the paid plan.
-
When will vision be incorporated into the model?
-
Hi! I am getting a 500 Internal Server Error and sometimes a 400. I hit this issue using Copilot Edits, Cline, and Roo Code in both VS Code and VS Code Insiders.
-
Is Claude 3.7 Sonnet available in agent mode? I can only see it in 'edit' mode.
-
Please add Claude 3.7 Sonnet back to Agent mode in VS Code.
-
🥹 Loving the new model.
-
Getting a lot of:
-
Getting rate limited even on the first message. Currently Sonnet 3.7 is unusable with GitHub Copilot.
-
"Sorry, the response hit the length limit. Please rephrase your prompt." It's useless |
-
Claude 3.7 Sonnet is actually unusable for me; I'm getting rate limited even on the first message.
-
The new model is awesome! Is there a chance Grok 3 Thinking or o3-high will make it in too?
-
Any time I try to use the model with the #codebase flag, I get this error:
-
Now that we have several reasoning models available, like o1, o3-mini, and Claude 3.7 Sonnet Thinking, is there any plan to show the thinking tokens in chat? The thinking tokens are very useful as a debugging mechanism for prompts.
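Copilot doesn't surface the thinking blocks yet, but for reference, here is a minimal sketch of what they look like when extended thinking is requested through the Anthropic Python SDK directly (the prompt, model ID, and token budgets below are illustrative, not anything Copilot itself uses):

```python
# Minimal sketch: inspecting thinking blocks via the Anthropic API directly,
# outside Copilot. Prompt, model ID, and budgets are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{"role": "user", "content": "Why does my regex backtrack so badly?"}],
)

# The response interleaves "thinking" and "text" content blocks.
for block in response.content:
    if block.type == "thinking":
        print("--- thinking ---")
        print(block.thinking)
    elif block.type == "text":
        print("--- answer ---")
        print(block.text)
```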
-
I'm using Claude Code with AWS Bedrock. Is anyone else getting rate-limiting errors right after the first message?
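Throttling on the very first message usually points at a low Bedrock on-demand quota for a newly released model rather than Claude Code itself. One way to confirm is to call Bedrock directly with explicit backoff and see whether the throttling persists; a hedged sketch using the Anthropic SDK's Bedrock client (the region and inference-profile ID are assumptions, adjust for your account):

```python
# Hedged sketch: probing Bedrock throttling directly, outside Claude Code.
# Region and model/inference-profile ID are assumptions for your account.
import time
from anthropic import AnthropicBedrock, RateLimitError

client = AnthropicBedrock(aws_region="us-east-1", max_retries=0)

def ask_with_backoff(prompt: str, attempts: int = 5) -> str:
    delay = 2.0
    for attempt in range(attempts):
        try:
            msg = client.messages.create(
                model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
                max_tokens=512,
                messages=[{"role": "user", "content": prompt}],
            )
            return msg.content[0].text
        except RateLimitError:
            # Throttled: wait and retry with exponential backoff.
            print(f"Throttled (attempt {attempt + 1}), sleeping {delay:.0f}s")
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Still throttled after retries; check your Bedrock quotas.")

print(ask_with_backoff("Hello, are you rate limited?"))
```

If this keeps failing too, a Bedrock quota increase request is the likelier fix than anything on the Claude Code side.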
-
Hard pass for now: 5 minutes of usage locked me out for 3+ hours, and I've never seen a wait time that long. It seems better, but I don't want to read a book when I refactor code; basically every iteration in agent mode it spits out more useless, repetitive comments than actual code.
-
My company is on the Copilot Business plan and 3.7 is enabled by the admin, but nobody on the team can see the new model in the dropdown menu (JetBrains plugin). What could the issue be?
-
Works well, but dang, you hit the limit really fast. When I say fast: I used thinking mode to refactor a single file of around 700 lines, asked it to add one more thing, then asked it to check a folder to see why an endpoint wasn't protected, and got a 3-hour rate limit. Not even o3 has limits that hit this hard. Back to 3.5, I guess...
-
I hit the limit on every request now.
-
I don't know about you guys, but the same thing happened to me with Sonnet 3.7 as with o3-mini: the first day it worked marvels, everything on the first try, and since then it has degraded massively for me as well. Many times the model just deletes a huge chunk of my code, simply deletes it. I'm not sure whether this happens after it loses context or on the first try (I can't find a way in VS Code to see the history of interactions in the Edits tab), or whether it loses context from going back and forth. Usually my flow is: I give it a prompt and feed the whole codebase for context (a small Nuxt project; only the shadcn components are numerous), and after 2-3 follow-up prompts I need to feed the codebase again because it loses it quickly. So, my thoughts: first, it should never happen that the model suggests deleting code that is important (or even code that isn't) when the task at hand doesn't warrant it. Second, I get that models have a limited context window, but we need a better way to keep the whole codebase in context, or at least a way to automatically identify its most important/pertinent parts.
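On that last point, context-selection tools generally do this with embedding- or keyword-based retrieval over the workspace. A toy sketch of the keyword-overlap flavor, purely to illustrate the idea (this is not how Copilot actually picks context, and the glob patterns are just an example for a Nuxt-style project):

```python
# Toy sketch of prompt-driven context selection: rank workspace files by
# keyword overlap with the prompt and keep the best ones. Purely illustrative;
# not how Copilot or any specific tool actually selects context.
import pathlib
import re

def rank_files(prompt: str, root: str = ".", top_k: int = 5) -> list[tuple[float, str]]:
    keywords = set(re.findall(r"\w+", prompt.lower()))
    scored = []
    for pattern in ("*.ts", "*.vue"):  # adjust for your project's file types
        for path in pathlib.Path(root).rglob(pattern):
            try:
                words = set(re.findall(r"\w+", path.read_text(errors="ignore").lower()))
            except OSError:
                continue
            overlap = len(keywords & words) / (len(keywords) or 1)
            scored.append((overlap, str(path)))
    return sorted(scored, reverse=True)[:top_k]

for score, path in rank_files("why is the /orders endpoint not protected by auth middleware"):
    print(f"{score:.2f}  {path}")
```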
-
I have a question about Claude Sonnet 3.7 input tokens. If I recall correctly, with 3.5 we had 200k tokens of input (context window). What about 3.7?
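For reference, 200k is the documented Claude context window; how much of it Copilot actually exposes is a separate question. A hedged sketch of checking a prompt's size against that window using the Anthropic SDK's token-counting endpoint (the file name is a placeholder):

```python
# Hedged sketch: measuring a prompt against the documented Claude context
# window via the Anthropic token-counting endpoint. Copilot may apply its
# own, smaller limits on top of this. "big_prompt.txt" is a placeholder.
import anthropic

CONTEXT_WINDOW = 200_000

client = anthropic.Anthropic()

count = client.messages.count_tokens(
    model="claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": open("big_prompt.txt").read()}],
)

print(f"{count.input_tokens} input tokens "
      f"({count.input_tokens / CONTEXT_WINDOW:.1%} of the context window)")
```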
-
Claude 3.7 Sonnet is now available in Copilot
See the changelog for Claude 3.7 Sonnet
The new Claude 3.7 Sonnet is now available in the Pro, Business, and Enterprise plans in Copilot Chat in Visual Studio Code, Copilot Chat on GitHub.com, and Visual Studio. The model will also be available soon in JetBrains IDEs.
We'd love to hear your suggestions for improvements and any issues you might see as you start using the new model!
_🟢 2025-02-25, 20:45:00 UTC: After intermittent instability, we have now rolled out access to all paid Copilot plans._