| Age | Commit message | Author |
|
|
|
Also, breaking API changes to:
* std.fs.Dir.readFileAlloc
* std.fs.Dir.readFileAllocOptions
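For orientation, a minimal sketch of the call shape this breaks, using the long-standing pre-change signature (the post-change argument order and limit type are not shown here and may differ):

```Zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Pre-change call shape: allocator first, then the path, then a byte limit.
    const contents = try std.fs.cwd().readFileAlloc(allocator, "build.zig", 1024 * 1024);
    defer allocator.free(contents);
}
```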
|
|
and delete deprecated alias std.io
|
|
Progress towards #24732
|
|
toss is only valid following a peek.
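A rough sketch of that rule, assuming the new `std.Io.Reader` interface where `peek` exposes buffered bytes without consuming them and `toss` then discards them; the exact signatures are assumptions, not taken from this commit:

```Zig
const std = @import("std");

/// Hedged sketch: `toss` may only discard bytes that a preceding `peek`
/// has already made visible in the reader's buffer.
fn skipUtf8Bom(reader: *std.Io.Reader) !void {
    const buffered = try reader.peek(3); // peek first, so the bytes are buffered
    if (std.mem.startsWith(u8, buffered, "\xEF\xBB\xBF")) {
        reader.toss(3); // valid: tossing only what was just peeked
    }
}
```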
|
|
|
|
|
|
|
|
it's not quite finished because I need to make it not copy the Resource
|
|
Closes #24686
As a bonus, this commit also makes the `git.zig` "testing `main`" compile again.
|
|
|
|
Thanks Ian Johnson for finding these
|
|
|
|
sorry, zip file creation has regressed because std lib no longer has a
deflate compression implementation
|
|
|
|
fsync blocks until the contents have actually been written to disk,
which would be useful if we did not want to report success until
durability has been achieved. But the OS will ensure coherency; i.e. if
one process writes data without calling fsync and another process then
reads that data, the writes will be seen even if they have not yet been
flushed to disk.
Since this code deals with ephemeral cache data, it's not worth trying
to achieve this kind of durability guarantee. This is consistent with
all the other tooling on the system.
Certainly, if we wanted to change our stance on this, it would not be
something that affects only the git fetching logic.
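For contrast, a small sketch of what opting into durability would look like; it assumes only `std.fs.File.sync` (the fsync wrapper) and is not something this cache code does:

```Zig
const std = @import("std");

fn writeDurably(dir: std.fs.Dir, name: []const u8, bytes: []const u8) !void {
    var file = try dir.createFile(name, .{});
    defer file.close();
    try file.writeAll(bytes);
    // Only needed if success must not be reported before the data is on disk;
    // for ephemeral cache data, coherency alone is enough, as argued above.
    try file.sync();
}
```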
|
|
This passes tests but it doesn't provide as big a window size as is
required to decompress larger streams.
The next commit in this branch will work towards that, without
introducing an additional buffer.
|
|
Alignment and fill options only apply to numbers.
Rework the implementation to mainly branch on the format string rather
than the type information. This is more straightforward to maintain and
more straightforward for comptime evaluation.
Enums support being printed as decimal, hexadecimal, octal, and binary.
`formatInteger` is another possible format method; it is called
unconditionally when the value type is a struct and one of the
integer-printing format specifiers is used.
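A hedged illustration of the enum support mentioned above; that the integer specifiers print the enum's integer tag value is an assumption, not verified against this commit:

```Zig
const std = @import("std");

pub fn main() void {
    const Flag = enum(u8) { read = 1, write = 2, exec = 4 };
    // Assumed behavior: {d}/{x}/{o}/{b} format the enum's integer tag value.
    std.debug.print("{d} {x} {o} {b}\n", .{ Flag.exec, Flag.exec, Flag.exec, Flag.exec });
}
```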
|
|
Introduces `std.fmt.alt` which is a helper for calling alternate format
methods besides one named "format".
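A hedged sketch of how such a helper might be used; the call shape (`std.fmt.alt(value, .methodName)` printed via `{f}`) and the method name below are assumptions rather than details taken from this commit:

```Zig
const std = @import("std");

const Duration = struct {
    ns: u64,

    // A hypothetical alternate format method, following the new signature.
    pub fn formatCompact(d: Duration, writer: *std.io.Writer) error{WriteFailed}!void {
        try writer.print("{d}ns", .{d.ns});
    }
};

pub fn main() void {
    const d: Duration = .{ .ns = 1500 };
    // Assumed usage: select the alternate method by name, print with {f}.
    std.debug.print("{f}\n", .{std.fmt.alt(d, .formatCompact)});
}
```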
|
|
|
|
|
|
added adapter to AnyWriter and GenericWriter to help bridge the gap
between old and new API
make std.testing.expectFmt work at compile-time
std.fmt no longer has a dependency on std.unicode. Formatted printing
was never properly unicode-aware. Now it no longer pretends to be.
Breakage/deprecations:
* std.fs.File.reader -> std.fs.File.deprecatedReader
* std.fs.File.writer -> std.fs.File.deprecatedWriter
* std.io.GenericReader -> std.io.Reader
* std.io.GenericWriter -> std.io.Writer
* std.io.AnyReader -> std.io.Reader
* std.io.AnyWriter -> std.io.Writer
* std.fmt.format -> std.fmt.deprecatedFormat
* std.fmt.fmtSliceEscapeLower -> std.ascii.hexEscape
* std.fmt.fmtSliceEscapeUpper -> std.ascii.hexEscape
* std.fmt.fmtSliceHexLower -> {x}
* std.fmt.fmtSliceHexUpper -> {X}
* std.fmt.fmtIntSizeDec -> {B}
* std.fmt.fmtIntSizeBin -> {Bi}
* std.fmt.fmtDuration -> {D}
* std.fmt.fmtDurationSigned -> {D}
* {} -> {f} when there is a format method
* format method signature (see the sketch after this list)
- anytype -> *std.io.Writer
- inferred error set -> error{WriteFailed}
- options -> (deleted)
* std.fmt.Formatted
- now takes context type explicitly
- no fmt string
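To make the format-method change concrete, a hedged sketch that follows the signature listed above (the surrounding details are illustrative, not taken from this commit):

```Zig
const std = @import("std");

const Point = struct {
    x: i32,
    y: i32,

    // Old shape (roughly): format(self, comptime fmt, options, writer: anytype).
    // New shape, per the list above: a single *std.io.Writer and a fixed error set.
    pub fn format(p: Point, writer: *std.io.Writer) error{WriteFailed}!void {
        try writer.print("({d},{d})", .{ p.x, p.y });
    }
};

pub fn main() void {
    const p: Point = .{ .x = 3, .y = 4 };
    // "{}" becomes "{f}" when there is a format method (per the list above).
    std.debug.print("{f}\n", .{p});
}
```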
|
|
|
|
Also use an optional as the return type instead of an error code.
|
|
|
|
Closes #21888
|
|
It was added to the standard library in #18733.
|
|
Closes #21976
|
|
|
|
This allows `std.Uri.resolve_inplace` to properly preserve the fact
that `new` is already escaped while `base` may not be. I originally
tried just moving `raw_uri` around, but that made URI resolution
unmanageably complicated, so I instead added per-component information
to `Uri`, which allows extra allocations to be avoided when constructing
URIs with components from different sources and, in some cases, defers
the work all the way to when the URI is printed, where an allocator may
not even be needed.
Closes #19587
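A hedged sketch of the per-component idea; the field and union member names (`.raw`, `.percent_encoded`) are assumptions, not taken from this commit:

```Zig
const std = @import("std");

pub fn main() void {
    // Each component records whether it is already escaped, so components
    // from different sources can be mixed without extra allocations, and
    // escaping can be deferred until the URI is printed.
    const uri: std.Uri = .{
        .scheme = "https",
        .host = .{ .raw = "example.com" }, // not escaped yet
        .path = .{ .percent_encoded = "/a%20b" }, // already escaped
    };
    _ = uri;
}
```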
|
|
|
|
Prepare test cases and store them in Fetch/testdata. They cover changes
in this PR as well as those from the previous one, #19111.
|
|
On case-insensitive file systems, don't overwrite files whose names
differ only in casing. Add a diagnostic error so the caller can decide
what to do.
|
|
My first zlib implementation broke git fetch because it introduced
[lookahead](https://github.com/ziglang/zig/issues/18967). That resulted
in workarounds [1](https://github.com/ziglang/zig/commit/80f3ef6e14a1213d1c3b31d515870afb43cc9379)
[2](https://github.com/ziglang/zig/commit/d00faa2407cdeaa058da62f2d95f64f9e7ed6a09).
After [fixing](https://github.com/ziglang/zig/pull/19163) lookahead in
the zlib decompressor, these fixes are no longer necessary.
|
|
I mistakenly thought this was dead code in an earlier commit in this
branch. This commit restores the proper behavior.
|
|
I originally removed these in 402f967ed5339fa3d828b7fe1d57cdb5bf38dbf2.
I allowed them to be added back in #15299 because they were smuggled in
alongside a bug fix; however, I wasn't kidding when I said that I wanted
to take the design of std.http in a different direction than using this
data structure.
Instead, some headers are provided via explicit field names populated
while parsing the HTTP request/response, and some are provided via
new fields that support passing extra, arbitrary headers.
This resulted in simplification of logic in many places, as well as
elimination of the possibility of failure in many places. There is
less deinitialization code happening now.
Furthermore, it made it no longer necessary to clone the headers data
structure in order to handle redirects.
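A hedged sketch of that direction, assuming a request-options struct with a caller-provided `server_header_buffer` and an `extra_headers` slice of name/value pairs (the option names are assumptions):

```Zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    var client: std.http.Client = .{ .allocator = allocator };
    defer client.deinit();

    const uri = try std.Uri.parse("http://example.com/");
    var header_buffer: [16 * 1024]u8 = undefined;

    var req = try client.open(.GET, uri, .{
        .server_header_buffer = &header_buffer,
        // Arbitrary headers that have no dedicated field:
        .extra_headers = &.{
            .{ .name = "x-trace-id", .value = "abc123" },
        },
    });
    defer req.deinit();
}
```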
http_proxy and https_proxy fields are now pointers since it is common
for them to be unpopulated.
loadDefaultProxies is changed into initDefaultProxies to communicate
that it does not actually load anything from disk or from the network.
The function is now leaky; the API user must pass an already
instantiated arena allocator. This removes the need to deinitialize
proxies.
Before, proxies stored arbitrary sets of headers. Now they only store
the authorization value.
Removed the duplicated code between https_proxy and http_proxy.
Finally, parsing failures of the environment variables now result in
errors being emitted rather than the proxy being silently ignored.
error.CompressionNotSupported is renamed to
error.CompressionUnsupported, matching the naming convention from all
the other errors in the same set.
Removed documentation comments that were redundant with field and type
names.
Disabling zstd decompression in the server for now; see #18937.
I found some apparently dead code in src/Package/Fetch/git.zig. I want
to check with Ian about this.
I discovered that test/standalone/http.zig is dead code: it is only
being compiled, not run. Furthermore, it hangs at the end if you run it
manually. The previous commits in this branch were written under the
assumption that this test was being run with
`zig build test-standalone`.
|
|
The buffer for HTTP headers is now always provided via a static buffer.
As a consequence, OutOfMemory is no longer a member of the read() error
set, and the API and implementation of Client and Server are simplified.
error.HttpHeadersExceededSizeLimit is renamed to
error.HttpHeadersOversize.
|
|
Review: https://github.com/ziglang/zig/pull/19032#pullrequestreview-1896251841
|
|
|
|
This commit works around #18967 by adding an `AccumulatingReader`, which
accumulates data read from the underlying packfile, and by keeping track
of the position in the packfile and hash/checksum information separately
rather than using reader composition. That is, the packfile position and
hashes/checksums are updated with the accumulated read history data only
after we can determine what data has actually been used by the
decompressor rather than merely being buffered.
The only addition to the standard library APIs to support this change is
the `unreadBytes` function in `std.compress.flate.Inflate`, which allows
the user to determine how many bytes have been read only for buffering
and not used as part of compressed data.
These changes can be reverted if #18967 is resolved with a decompressor
that reads precisely only the number of bytes needed for decompression.
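A hedged sketch of the accounting this enables; only `unreadBytes` is taken from the commit text, and the rest is illustrative:

```Zig
const std = @import("std");

/// Of everything pulled from the packfile so far, how much has the inflater
/// actually consumed, as opposed to merely read ahead into its buffer?
/// `unread_bytes` is what `Inflate.unreadBytes()` would report.
fn consumedLen(total_read: usize, unread_bytes: usize) usize {
    return total_read - unread_bytes;
}

test "consumed length" {
    // 100 bytes read from the packfile, 12 still only buffered as lookahead:
    // only 88 bytes may advance the packfile position and the hashes.
    try std.testing.expectEqual(@as(usize, 88), consumedLen(100, 12));
}
```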
|
|
Zig deflate compression/decompression implementation. It supports compression and decompression of the gzip, zlib and raw deflate formats.
Fixes #18062.
This PR replaces the current compress/gzip and compress/zlib packages. The deflate package is renamed to flate; flate is the common name for deflate/inflate, where deflate is compression and inflate is decompression.
There are breaking changes. Method signatures have changed because of the removal of the allocator, and I also unified the API for all three namespaces (flate, gzip, zlib).
For now I have put the old packages under a v1 namespace, so they are still available as compress/v1/gzip, compress/v1/zlib, and compress/v1/deflate. The idea is to give users of the current API a little time before they have to analyze what they need to change, although that raises the question of when it is safe to remove the v1 namespace.
Here is current API in the compress package:
```Zig
// deflate
fn compressor(allocator, writer, options) !Compressor(@TypeOf(writer))
fn Compressor(comptime WriterType) type
fn decompressor(allocator, reader, null) !Decompressor(@TypeOf(reader))
fn Decompressor(comptime ReaderType: type) type
// gzip
fn compress(allocator, writer, options) !Compress(@TypeOf(writer))
fn Compress(comptime WriterType: type) type
fn decompress(allocator, reader) !Decompress(@TypeOf(reader))
fn Decompress(comptime ReaderType: type) type
// zlib
fn compressStream(allocator, writer, options) !CompressStream(@TypeOf(writer))
fn CompressStream(comptime WriterType: type) type
fn decompressStream(allocator, reader) !DecompressStream(@TypeOf(reader))
fn DecompressStream(comptime ReaderType: type) type
// xz
fn decompress(allocator: Allocator, reader: anytype) !Decompress(@TypeOf(reader))
fn Decompress(comptime ReaderType: type) type
// lzma
fn decompress(allocator, reader) !Decompress(@TypeOf(reader))
fn Decompress(comptime ReaderType: type) type
// lzma2
fn decompress(allocator, reader, writer) !void
// zstandard:
fn DecompressStream(ReaderType, options) type
fn decompressStream(allocator, reader) DecompressStream(@TypeOf(reader), .{})
struct decompress
```
The proposed naming convention:
- Compressor/Decompressor for functions which return type, like Reader/Writer/GeneralPurposeAllocator
- compressor/decompressor for functions which are initializers for that type, like reader/writer/allocator
- compress/decompress for one shot operations, accepts reader/writer pair, like read/write/alloc
```Zig
/// Compress from reader and write compressed data to the writer.
fn compress(reader: anytype, writer: anytype, options: Options) !void
/// Create a Compressor which writes compressed data to the writer.
fn compressor(writer: anytype, options: Options) !Compressor(@TypeOf(writer))
/// Compressor type
fn Compressor(comptime WriterType: type) type
/// Decompress from reader and write plain data to the writer.
fn decompress(reader: anytype, writer: anytype) !void
/// Create Decompressor which reads from reader.
fn decompressor(reader: anytype) Decompressor(@TypeOf(reader))
/// Decompressor type
fn Decompressor(comptime ReaderType: type) type
```
Comparing this implementation with the one we currently have in Zig's standard library (std): std is roughly 1.2-1.4 times slower in decompression and 1.1-1.2 times slower in compression. Compressed sizes are pretty much the same in both cases.
More results in [this](https://github.com/ianic/flate) repo.
This library uses static allocations for all structures and doesn't require an allocator. That makes sense especially for deflate, where all structures and internal buffers are allocated to their full size. It makes a little less sense for inflate, where the std version uses less memory by not preallocating arrays to their theoretical maximum size, since they are usually not fully used.
For deflate this library allocates 395K while std allocates 779K.
For inflate this library allocates 74.5K while std allocates around 36K.
The inflate difference is because we use a 64K history here instead of the 32K in std.
If merged, existing usage of compress gzip/zlib/deflate needs some changes. Here is an example with the necessary changes in comments:
```Zig
const std = @import("std");
// To get this file:
// wget -nc -O war_and_peace.txt https://www.gutenberg.org/ebooks/2600.txt.utf-8
const data = @embedFile("war_and_peace.txt");
pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer std.debug.assert(gpa.deinit() == .ok);
    const allocator = gpa.allocator();

    try oldDeflate(allocator);
    try new(std.compress.flate, allocator);

    try oldZlib(allocator);
    try new(std.compress.zlib, allocator);

    try oldGzip(allocator);
    try new(std.compress.gzip, allocator);
}

pub fn new(comptime pkg: type, allocator: std.mem.Allocator) !void {
    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Compressor
    var cmp = try pkg.compressor(buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.finish();

    var fbs = std.io.fixedBufferStream(buf.items);
    // Decompressor
    var dcp = pkg.decompressor(fbs.reader());
    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);

    try std.testing.expectEqualSlices(u8, data, plain);
}

pub fn oldDeflate(allocator: std.mem.Allocator) !void {
    const deflate = std.compress.v1.deflate;
    // Compressor
    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Remove allocator
    // Rename deflate -> flate
    var cmp = try deflate.compressor(allocator, buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.close(); // Rename to finish
    cmp.deinit(); // Remove

    // Decompressor
    var fbs = std.io.fixedBufferStream(buf.items);
    // Remove allocator and last param
    // Rename deflate -> flate
    // Remove try
    var dcp = try deflate.decompressor(allocator, fbs.reader(), null);
    defer dcp.deinit(); // Remove

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);

    try std.testing.expectEqualSlices(u8, data, plain);
}

pub fn oldZlib(allocator: std.mem.Allocator) !void {
    const zlib = std.compress.v1.zlib;
    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Compressor
    // Rename compressStream => compressor
    // Remove allocator
    var cmp = try zlib.compressStream(allocator, buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.finish();
    cmp.deinit(); // Remove

    var fbs = std.io.fixedBufferStream(buf.items);
    // Decompressor
    // decompressStream => decompressor
    // Remove allocator
    // Remove try
    var dcp = try zlib.decompressStream(allocator, fbs.reader());
    defer dcp.deinit(); // Remove

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);

    try std.testing.expectEqualSlices(u8, data, plain);
}

pub fn oldGzip(allocator: std.mem.Allocator) !void {
    const gzip = std.compress.v1.gzip;
    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Compressor
    // Rename compress => compressor
    // Remove allocator
    var cmp = try gzip.compress(allocator, buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.close(); // Rename to finish
    cmp.deinit(); // Remove

    var fbs = std.io.fixedBufferStream(buf.items);
    // Decompressor
    // Rename decompress => decompressor
    // Remove allocator
    // Remove try
    var dcp = try gzip.decompress(allocator, fbs.reader());
    defer dcp.deinit(); // Remove

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);

    try std.testing.expectEqualSlices(u8, data, plain);
}
```
|
|
|
|
|
|
Addresses a comment in #17779 pointing out the inability to fetch the
upstream BoringSSL sources over Git. The reason is that the Git server
used in this case did not include the optional (but recommended) LF
terminator for textual pkt-line data. This commit adjusts the handling
of textual pkt-line data so that it works both with and without the
optional trailing LF.
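A minimal sketch of the tolerant handling (not the commit's actual code): a textual pkt-line payload may or may not end in LF, so strip it if present:

```Zig
const std = @import("std");

/// Accept textual pkt-line data both with and without the optional trailing LF.
fn trimPktLinePayload(payload: []const u8) []const u8 {
    return std.mem.trimRight(u8, payload, "\n");
}

test "optional LF terminator" {
    try std.testing.expectEqualStrings("version 2", trimPktLinePayload("version 2\n"));
    try std.testing.expectEqualStrings("version 2", trimPktLinePayload("version 2"));
}
```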
|
|
Let's take this breaking change opportunity to fix the style of this
enum.
|
|
Use inline to vastly simplify the exposed API. This allows a
comptime-known endian parameter to be propagated, making extra functions
for a specific endianness completely unnecessary.
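A small sketch of the pattern (not the commit's code): with `inline`, a comptime-known endian argument folds away at the call site, so separate little/big variants are unnecessary:

```Zig
const std = @import("std");

/// One function instead of readU32Little/readU32Big: when `endian` is
/// comptime-known at the call site, `inline` lets the branch fold away.
pub inline fn readU32(bytes: *const [4]u8, endian: std.builtin.Endian) u32 {
    return std.mem.readInt(u32, bytes, endian);
}

test "comptime-known endianness" {
    const bytes = [4]u8{ 0x01, 0x02, 0x03, 0x04 };
    try std.testing.expectEqual(@as(u32, 0x04030201), readU32(&bytes, .little));
    try std.testing.expectEqual(@as(u32, 0x01020304), readU32(&bytes, .big));
}
```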
|
|
|
|
|