std: add streamUntilEof to io #22681

Open · wants to merge 3 commits into master

Conversation

@Warrper commented Jan 30, 2025:

It's currently quite inconvenient to read an entire file using the streamUntilDelimiter function. This one replicates most of the same functionality, but reads until it hits EOF rather than a delimiter.
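
For context, here is a minimal sketch of the shape such a function could take (illustrative only, not the PR's actual code; the parameters mirror streamUntilDelimiter's writer and optional max size):

    /// Sketch only: copy bytes from `reader` to `writer` until end-of-stream,
    /// failing if more than `optional_max_size` bytes are encountered.
    fn streamUntilEof(reader: anytype, writer: anytype, optional_max_size: ?usize) !void {
        var count: usize = 0;
        while (true) {
            const byte = reader.readByte() catch |err| switch (err) {
                error.EndOfStream => return,
                else => |e| return e,
            };
            if (optional_max_size) |max| {
                if (count >= max) return error.StreamTooLong;
            }
            count += 1;
            try writer.writeByte(byte);
        }
    }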

@ifreund (Member) left a comment:

It would be nice to have a unit test in lib/std/io/Reader/test.zig as well
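
For example, something along these lines could work (a sketch only; it assumes the PR's streamUntilEof takes a writer and an optional max size like streamUntilDelimiter does, and returns error.StreamTooLong when the limit is exceeded):

    const std = @import("std");

    test "streamUntilEof reads the full stream" {
        var input = std.io.fixedBufferStream("some example data");
        var out_buf: [32]u8 = undefined;
        var output = std.io.fixedBufferStream(&out_buf);

        try input.reader().streamUntilEof(output.writer(), null);
        try std.testing.expectEqualStrings("some example data", output.getWritten());
    }

    test "streamUntilEof respects the max size" {
        var input = std.io.fixedBufferStream("some example data");
        var out_buf: [32]u8 = undefined;
        var output = std.io.fixedBufferStream(&out_buf);

        try std.testing.expectError(
            error.StreamTooLong,
            input.reader().streamUntilEof(output.writer(), 4),
        );
    }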

The inline review discussion below refers to these lines of the patch:

    ) anyerror!void {
        if (optional_max_size) |max_size| {
            for (0..max_size) |_| {
                const byte: u8 = try self.readByte();

Member:
This should use read or readAll for efficiency; see the implementation of readAllArrayListAligned(). There is no need to check every byte, as we are not searching for a delimiter here.
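
Roughly, the chunked approach being suggested could look like this (a sketch only; the 4096-byte buffer size is illustrative):

    /// Sketch only: stream to `writer` in chunks rather than byte-by-byte.
    fn streamUntilEof(reader: anytype, writer: anytype, optional_max_size: ?usize) !void {
        var buf: [4096]u8 = undefined;
        var total: usize = 0;
        while (true) {
            const n = try reader.read(&buf);
            if (n == 0) return; // end of stream
            total += n;
            if (optional_max_size) |max| {
                if (total > max) return error.StreamTooLong;
            }
            try writer.writeAll(buf[0..n]);
        }
    }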

Author:
I couldn't really figure out a way to do this while still using the writer pattern. I'm not too sure how important that is?

Member:
Right, the more efficient approach requires a buffer. As mlugg points out in #22677 (comment), the core of this functionality is already provided by LinearFifo.pump(). The part that's missing from pump() is the ability to set a max size for robustness.

In any case, it would be good if streamUntilEof() did not require an allocator. I suppose we might need to make the fixed size of the buffer configurable, though; I'm not sure what the ideal interface here is.
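
A rough sketch of that direction, using a fixed-size LinearFifo and no allocator (the names and the 4096-byte buffer size are only illustrative, not a settled interface):

    const std = @import("std");

    /// Sketch only: a pump-style loop with a max-size check.
    fn streamUntilEof(reader: anytype, writer: anytype, max_size: usize) !void {
        var fifo = std.fifo.LinearFifo(u8, .{ .Static = 4096 }).init();
        var total: usize = 0;
        while (true) {
            const n = try reader.read(fifo.writableSlice(0));
            if (n == 0) return; // end of stream
            total += n;
            if (total > max_size) return error.StreamTooLong;
            fifo.update(n);
            try writer.writeAll(fifo.readableSlice(0));
            fifo.discard(fifo.readableLength());
        }
    }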

Author:
The reason why I wanted this functionality was basically the use case of 'fill up an ArrayList with the contents of this file'. I wanted heap-allocated memory specifically because I don't want to have to care about how much space I need to store the file. I've only been using Zig for a few weeks, so I'm still a bit fresh; apologies if I don't make any sense.

@mlugg (Member) Jan 31, 2025:

> The reason why I wanted this functionality was basically the use case of 'fill up an ArrayList with the contents of this file'

This already exists as readAllArrayList (or, if you just want the whole file in a buffer and don't actually need to mutate the arraylist afterwards, readAllAlloc).
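
For reference, those existing helpers in use (the file name and 1 MiB limit are illustrative):

    const std = @import("std");

    pub fn main() !void {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit();
        const allocator = gpa.allocator();

        var file = try std.fs.cwd().openFile("example.txt", .{});
        defer file.close();

        // Whole file into a freshly allocated buffer, capped at 1 MiB:
        const contents = try file.reader().readAllAlloc(allocator, 1024 * 1024);
        defer allocator.free(contents);

        // Or append into an existing ArrayList(u8):
        var list = std.ArrayList(u8).init(allocator);
        defer list.deinit();
        try file.seekTo(0);
        try file.reader().readAllArrayList(&list, 1024 * 1024);
    }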

@Warrper (Author) Jan 31, 2025:

Ah right, thanks for that. I got a bit interested in seeing if I could make this work with the fifo.pump that was mentioned. I've managed to pull some of the logic out of that to allow a max length while still using the same underlying machinery.

> In any case, it would be good if streamUntilEof() did not require an allocator. I suppose we might need to make the fixed size of the buffer configurable, though; I'm not sure what the ideal interface here is.

@ifreund - I think I've pretty much managed what you're asking here:

f49ec0e
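
(Not necessarily what that commit does, but for illustration, one way to keep reusing pump() while enforcing a max length is to wrap the streams with the existing std.io.limitedReader and std.io.countingWriter helpers:)

    const std = @import("std");

    /// Sketch only: enforce a byte limit around fifo.pump(). Note that up to
    /// max_size + 1 bytes may be written before the error is reported.
    fn streamUntilEofLimited(reader: anytype, writer: anytype, max_size: usize) !void {
        // Allow one byte past the limit so an over-long stream is detectable.
        var limited = std.io.limitedReader(reader, max_size + 1);
        var counting = std.io.countingWriter(writer);
        var fifo = std.fifo.LinearFifo(u8, .{ .Static = 4096 }).init();
        try fifo.pump(limited.reader(), counting.writer());
        if (counting.bytes_written > max_size) return error.StreamTooLong;
    }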
