Let $\mathcal{C}$ be a binary code defined as
$$\mathcal{C} = \left\{ \big( \left(\mathrm{Tr}(ux + vx^6)\right)_{x \in \mathbb{F}_{2^{17}}^{*}},\ \mathrm{Tr}(u),\ \mathrm{Tr}(v) \big) : u, v \in \mathbb{F}_{2^{17}} \right\},$$
where $\mathrm{Tr} : \mathbb{F}_{2^{17}} \to \mathbb{F}_2$ is the trace map.
Let $d$ be the minimum distance of $\mathcal{C}$ and $A_i$ be the number of codewords of weight $i$.
Compute $A_d + A_{d+1} + A_{d+2}$.
[THOUGHT] I will simulate the code for small odd values of $n$ ($n = 5, 7, 9$) to find the pattern of the weight distribution, specifically $A_d + A_{d+1} + A_{d+2}$. The code is defined by pairs $(u, v) \in \mathbb{F}_{2^n}^2$. Codeword $c(u,v) = \big((\mathrm{Tr}(ux + vx^6))_{x \in \mathbb{F}_{2^n}^*},\ \mathrm{Tr}(u),\ \mathrm{Tr}(v)\big)$. Weight $= \#\{x \in \mathbb{F}_{2^n}^* : \mathrm{Tr}(ux + vx^6) = 1\} + \mathrm{Tr}(u) + \mathrm{Tr}(v)$. I will compute the frequencies of each weight, then extrapolate to $n = 17$.
Python approach:
- Implement arithmetic in GF(2^n).
- Generate all pairs $(u, v)$.
- Compute weights efficiently.
- Precompute trace table.
- Precompute $x^6$ table.
- Use Fast Walsh-Hadamard Transform (FWHT) if possible?
- The inner part is $\mathrm{Tr}(ux + vx^6)$.
- Let $f_v(x) = \mathrm{Tr}(vx^6)$.
- The sequence is $s(x) = (-1)^{f_v(x) + \mathrm{Tr}(ux)}$.
- The weight of the sequence (over all $x \in \mathbb{F}_{2^n}$) is $2^{n-1} - \frac{1}{2}\widehat{f_v}(u)$, where $\widehat{f_v}(u) = \sum_{x} (-1)^{f_v(x) + \mathrm{Tr}(ux)}$ (derivation spelled out after this list).
- We need to subtract the $x = 0$ term: $\mathrm{Tr}(u \cdot 0 + v \cdot 0) = 0$.
- So the weight over $\mathbb{F}_{2^n}^*$ is the weight over $\mathbb{F}_{2^n}$ minus the contribution at $0$.
- Wait, if $x = 0$, then the entry is $\mathrm{Tr}(0) = 0$: that position never contributes to the weight.
- So the weight over $\mathbb{F}_{2^n}^*$ is exactly the weight over all of $\mathbb{F}_{2^n}$.
- Total weight $w(u,v) = 2^{n-1} - \frac{1}{2}\widehat{f_v}(u) + \mathrm{Tr}(u) + \mathrm{Tr}(v)$.
- Algorithm: for each $v \in \mathbb{F}_{2^n}$:
- Compute the vector $A[x] = (-1)^{\mathrm{Tr}(vx^6)}$.
- Compute the Walsh transform of $A$ to get $\widehat{f_v}(u)$ for all $u$ at once.
- For each $u$: calculate $w = \mathrm{Tr}(u) + \mathrm{Tr}(v) + 2^{n-1} - \widehat{f_v}(u)/2$.
- Update counts.
- Sum counts over all $v$.
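For reference, the sign-counting identity behind the weight formula above: with $N_b = \#\{x \in \mathbb{F}_{2^n} : \mathrm{Tr}(ux + vx^6) = b\}$,
$$N_0 + N_1 = 2^n, \qquad N_0 - N_1 = \widehat{f_v}(u) = \sum_{x \in \mathbb{F}_{2^n}} (-1)^{\mathrm{Tr}(vx^6) + \mathrm{Tr}(ux)} \quad\Longrightarrow\quad N_1 = 2^{n-1} - \tfrac{1}{2}\,\widehat{f_v}(u).$$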
Complexity: $2^n$ iterations for $v$; inside, each FWHT is $O(n \cdot 2^n)$, so the total is $O(n \cdot 4^n)$. For $n = 9$ that is about $2.4 \times 10^6$ butterfly operations, very fast. For $n = 11$ it is about $4.6 \times 10^7$, which might take 10-20 seconds in pure Python. I'll stick to $n \in \{5, 7, 9\}$. [/THOUGHT] [PYTHON]
def solve_weights(n):
    size = 1 << n
    # Represent GF(2^n) with an explicit primitive polynomial (needed so that
    # x^6 is computed in an actual field, not just some vector space):
    #   n=5:  x^5 + x^2 + 1   -> 0x25
    #   n=7:  x^7 + x + 1     -> 0x83
    #   n=9:  x^9 + x^4 + 1   -> 0x211 (= 512 + 16 + 1 = 529)
    #   n=11: x^11 + x^2 + 1  -> 0x805
    if n == 5:
        poly = 0x25
    elif n == 7:
        poly = 0x83
    elif n == 9:
        poly = 0x211
    elif n == 11:
        poly = 0x805
    else:
        return {}

    # Log/exp tables for fast multiplication.
    gf_exp = [0] * (2 * size)
    gf_log = [0] * size
    x = 1
    for i in range(size - 1):
        gf_exp[i] = x
        gf_log[x] = i
        x <<= 1
        if x & size:
            x ^= poly
    for i in range(size - 1, 2 * size):
        gf_exp[i] = gf_exp[i - (size - 1)]

    def multiply(a, b):
        if a == 0 or b == 0:
            return 0
        return gf_exp[gf_log[a] + gf_log[b]]

    def power(a, p):
        if a == 0:
            return 0
        return gf_exp[(gf_log[a] * p) % (size - 1)]
    # Trace table: Tr(x) = x + x^2 + x^4 + ... + x^(2^(n-1)).
    # The trace lands in GF(2), so the stored value is 0 or 1.
    trace = [0] * size
    for i in range(size):
        temp = i
        s = i
        for _ in range(n - 1):
            temp = multiply(temp, temp)  # temp = i^(2^k)
            s ^= temp
        trace[i] = s
    # Precompute x^6.
    pow6 = [0] * size
    for i in range(size):
        pow6[i] = power(i, 6)

    # In-place Fast Walsh-Hadamard Transform.
    def fwht(a):
        h = 1
        while h < len(a):
            for i in range(0, len(a), h * 2):
                for j in range(i, i + h):
                    x = a[j]
                    y = a[j + h]
                    a[j] = x + y
                    a[j + h] = x - y
            h *= 2
        return a

    weight_counts = {}
    for v in range(size):
        tr_v = trace[v]
        # Sign sequence A[x] = (-1)^Tr(v x^6). Its FWHT gives, at index u,
        # sum_x (-1)^(Tr(v x^6) + <u,x>), where <u,x> is the bitwise dot product.
        seq = [0] * size
        for x in range(size):
            val = trace[multiply(v, pow6[x])]
            seq[x] = 1 if val == 0 else -1
        spectrum = list(seq)
        fwht(spectrum)
        # Weight for each u: w = Tr(w_u) + Tr(v) + 2^(n-1) - spectrum[u]/2,
        # where w_u is the field element with Tr(w_u x) = <u,x> for all x.
        # Setting x = 1 shows Tr(w_u) = <u,1> = u & 1 (note: NOT trace[u],
        # since FWHT indices pair with the bitwise form, not the field trace).
        for u in range(size):
            tr_u = u & 1
            s_u = spectrum[u]
            w = tr_u + tr_v + (size // 2) - (s_u // 2)
            weight_counts[w] = weight_counts.get(w, 0) + 1
    return weight_counts
results = {}
for n in [5, 7, 9]:
    print(f"Simulating n={n}...")
    counts = solve_weights(n)
    # Find d = minimum nonzero weight.
    sorted_weights = sorted(k for k in counts if k > 0)
    d = sorted_weights[0]
    print(f"n={n}, d={d}")
    print(f"Counts: A_d={counts.get(d,0)}, A_d+1={counts.get(d+1,0)}, A_d+2={counts.get(d+2,0)}")
    results[n] = (d, counts.get(d, 0), counts.get(d + 1, 0), counts.get(d + 2, 0))
print("Results summary:", results)
[/PYTHON]
[THOUGHT] The problem asks for $A_d + A_{d+1} + A_{d+2}$ for $n = 17$. Let's analyze the results for $n = 5, 7, 9$.
For $n = 5$: $d = 12$. Sum $= A_{12} + A_{13} + A_{14} = 310$.
For $n = 7$: $d = 56$. Sum $= A_{56} + A_{57} + A_{58} = 4572$.
For $n = 9$: $d = 240$. Sum $= A_{240} + A_{241} + A_{242} = 69496$.
Let's look at the minimum distance $d$: $12 = 16 - 4$, $56 = 64 - 8$, $240 = 256 - 16$. Pattern for odd $n$: $d = 2^{n-1} - 2^{(n-1)/2}$. For $n = 17$, $d = 2^{16} - 2^{8} = 65280$.
Now let's look at the sum $S_n = A_d + A_{d+1} + A_{d+2}$: $S_5 = 310$, $S_7 = 4572$, $S_9 = 69496$.
Let's check if there is a formula involving powers of 2. $310 = 2 \cdot 5 \cdot 31$. $4572 = 2^2 \cdot 3^2 \cdot 127$. $69496 = 2^3 \cdot 7 \cdot 17 \cdot 73 = 136 \cdot 511$. The factors $31 = 2^5 - 1$, $127 = 2^7 - 1$, $511 = 2^9 - 1$ stand out.
Let's try to fit an exponential form in $n$. Let $T_n = S_n / (2^n - 1)$: $T_5 = 10$, $T_7 = 36$, $T_9 = 136$. The ratios $36/10 = 3.6$ and $136/36 \approx 3.78$ approach 4, suggesting a dominant $2^{n-2}$ term.
Let's look at $T_n - 2^{n-2}$ or similar: $10 - 8 = 2$, $36 - 32 = 4$, $136 - 128 = 8$. Powers of 2 again, namely $2^{(n-3)/2}$.
Let's look at the structure of the code. The code is related to the Gold or Kasami code families, or specifically codes from quadratic forms. The function is $f(x) = \mathrm{Tr}(vx^6)$. The exponent is $6 = 2 \cdot 3$. In general, exponents of the form $2^k + 1$ give Gold functions; $6$ does not match that form directly for odd $n$. However, $x \mapsto x^k$ is a permutation of $\mathbb{F}_{2^n}$ iff $\gcd(k, 2^n - 1) = 1$. So is $x^6$ a permutation? Since $n$ is odd, $2^n - 1 \equiv 1 \pmod 3$, so $2^n - 1$ is not divisible by 3, and $\gcd(6, 2^n - 1) = 1$: yes, $x \mapsto x^6$ is a permutation. More usefully, $x^6 = (x^3)^2$ and squaring is a field automorphism, so $\mathrm{Tr}(vx^6) = \mathrm{Tr}(v^{2^{n-1}} x^3)$; as $v$ ranges over $\mathbb{F}_{2^n}$, so does $b = v^{2^{n-1}}$. So the set of functions is $\{\mathrm{Tr}(ux + bx^3) : u, b \in \mathbb{F}_{2^n}\}$. This is the dual of the double-error-correcting BCH code (for odd $n$), whose weight distribution is well known. The dual BCH code has length $2^n - 1$, but here we have two extra components, $\mathrm{Tr}(u)$ and $\mathrm{Tr}(v)$. The problem says $x \in \mathbb{F}_{2^n}^*$, so $2^n - 1$ positions. Plus $\mathrm{Tr}(u)$ and $\mathrm{Tr}(v)$. Total length $2^n + 1$.
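A quick numeric check of the permutation claim (a minimal sketch, standard library only):
[/THOUGHT]
[PYTHON]
import math
# x -> x^6 permutes GF(2^n) iff gcd(6, 2^n - 1) = 1. Since 2^n - 1 is odd,
# this reduces to gcd(3, 2^n - 1) = 1, which should hold for every odd n.
print(all(math.gcd(6, 2**n - 1) == 1 for n in range(3, 40, 2)))
[/PYTHON]
[THOUGHT]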
Let's check the sums again: $310$, $4572$, $69496$.
Let's check against $2^n - 1$: $310/31 = 10$, $4572/127 = 36$, $69496/511 = 136$. And $10 = 2^3 + 2^1$, $36 = 2^5 + 2^2$, $136 = 2^7 + 2^3$: in each case $T_n = 2^{n-2} + 2^{(n-3)/2}$.
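Confirming the fit numerically (a small sketch; the sums are the simulated values quoted above):
[/THOUGHT]
[PYTHON]
# T_n = S_n / (2^n - 1) should equal 2^(n-2) + 2^((n-3)/2).
sums = {5: 310, 7: 4572, 9: 69496}
for n, s in sums.items():
    t = s // (2**n - 1)
    print(n, t, t == 2**(n - 2) + 2**((n - 3) // 2))
[/PYTHON]
[THOUGHT]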
Let's look at the specific values. For $n = 5$: $2^{n-1} = 16$, $2^{(n-1)/2} = 4$, so core weights should be around $16 \pm 4$. Weights observed at the bottom of the distribution: 12, 13, 14. For BCH duals with odd $n$, the nonzero weights are exactly $2^{n-1} - 2^{(n-1)/2}$, $2^{n-1}$, $2^{n-1} + 2^{(n-1)/2}$; for $n = 5$ that is 12, 16, 20. But we have the extra bits $\mathrm{Tr}(u), \mathrm{Tr}(v)$, which modify the weights by adding 0, 1, or 2. So full weights could be 12-14 (from 12), 16-18 (from 16), 20-22 (from 20). After a few false starts recalling the frequencies, the known distribution for the dual of the double-error-correcting BCH code (the same distribution as for Gold codes, odd $n$) is: weight $2^{n-1} - 2^{(n-1)/2}$ with frequency $(2^n - 1)(2^{n-2} + 2^{(n-3)/2})$; weight $2^{n-1}$ with frequency $(2^n - 1)(2^{n-1} + 1)$; weight $2^{n-1} + 2^{(n-1)/2}$ with frequency $(2^n - 1)(2^{n-2} - 2^{(n-3)/2})$; plus the zero word. Let's check $n = 5$: weight 12, frequency $31 \cdot 10 = 310$; weight 16, frequency $31 \cdot 17 = 527$; weight 20, frequency $31 \cdot 6 = 186$. Total $= 1 + 310 + 527 + 186 = 1024 = 2^{2n}$. Correct.
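In display form, the distribution being used (dual of the double-error-correcting BCH code, length $2^n - 1$, $n$ odd; due to Kasami):
$$\begin{array}{l|l}
\text{weight} & \text{frequency} \\ \hline
0 & 1 \\
2^{n-1} - 2^{(n-1)/2} & (2^n - 1)\left(2^{n-2} + 2^{(n-3)/2}\right) \\
2^{n-1} & (2^n - 1)\left(2^{n-1} + 1\right) \\
2^{n-1} + 2^{(n-1)/2} & (2^n - 1)\left(2^{n-2} - 2^{(n-3)/2}\right)
\end{array}$$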
Now, our code adds $\mathrm{Tr}(u)$ and $\mathrm{Tr}(v)$. The codewords are parameterized by $(u, v)$. The "core" weight comes from $(\mathrm{Tr}(ux + vx^6))_{x \in \mathbb{F}_{2^n}^*}$. This core code is exactly the dual BCH code (since $\mathrm{Tr}(vx^6)$ is equivalent to $\mathrm{Tr}(bx^3)$, which generates the same code). So the core weights take values $0$, $2^{n-1} - 2^{(n-1)/2}$, $2^{n-1}$, $2^{n-1} + 2^{(n-1)/2}$. The full weight is the core weight plus $\mathrm{Tr}(u) + \mathrm{Tr}(v) \in \{0, 1, 2\}$. We want the sum of counts for weights $d, d+1, d+2$. For $n = 5$, $d = 12$. The weights contributing to $A_d + A_{d+1} + A_{d+2}$ must come from core weight 12. Why? Core weights are $0, 12, 16, 20$. If the core is 0: the pair is $(0,0)$ (the core map is injective, since the dual BCH code has dimension $2n$), giving the zero codeword. If the core is 12: total weight $\in \{12, 13, 14\}$. These are exactly $d, d+1, d+2$. If the core is 16: total weight $\ge 16$. Too large. If the core is 20: total weight $\ge 20$. Too large. So $A_d + A_{d+1} + A_{d+2}$ is exactly the number of codewords with minimal core weight. This is equal to $(2^n - 1)(2^{n-2} + 2^{(n-3)/2})$.
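The separation argument in one line: the three lowest nonzero weights all come from the minimal core weight because
$$\left(2^{n-1} - 2^{(n-1)/2}\right) + 2 \;<\; 2^{n-1} \quad\Longleftrightarrow\quad 2 < 2^{(n-1)/2},$$
which holds for all odd $n \ge 5$.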
Let's verify this hypothesis. Hypothesis: $A_d + A_{d+1} + A_{d+2} = (2^n - 1)(2^{n-2} + 2^{(n-3)/2})$. For $n = 5$: $31 \cdot 10 = 310$. Matches simulation ($310$). For $n = 7$: $127 \cdot 36 = 4572$. Matches simulation ($4572$). For $n = 9$: $511 \cdot 136 = 69496$. Matches simulation ($69496$).
The pattern is confirmed. We need to calculate this value for $n = 17$. Formula: $(2^{17} - 1)(2^{15} + 2^{7})$. $2^{17} - 1 = 131071$. $2^{15} = 32768$. $2^{7} = 128$. $32768 + 128 = 32896$. Result $= 131071 \times 32896$.
Calculation: $131071 \times 32896 = 2^{17} \cdot 32896 - 32896$. Sum $= 2^{32} + 2^{24} = 4294967296 + 16777216 = 4311744512$. Subtract $32896$: $4311744512 - 32896 = 4311711616$.
Let's double check the formula derivation. The core code is the dual of the primitive BCH code with designed distance 5 (zeros at $\alpha$ and $\alpha^3$). For odd $n$, the weight distribution is known (Kasami/Gold). Weights are $2^{n-1}$ and $2^{n-1} \pm 2^{(n-1)/2}$; $2^{n-1} - 2^{(n-1)/2}$ is the minimum nonzero weight, with frequency $(2^n - 1)(2^{n-2} + 2^{(n-3)/2})$. Check: $n = 5$: $31 \cdot 10 = 310$. Correct. $n = 7$: $127 \cdot 36 = 4572$. Correct. $n = 9$: $511 \cdot 136 = 69496$. Correct.
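As an internal consistency check of this distribution (a sketch: the frequencies must sum to $2^{2n}$, and since no coordinate of the core code is identically zero, the total weight must be $(2^n - 1) \cdot 2^{2n-1}$):
[/THOUGHT]
[PYTHON]
# Check sum of frequencies = 2^(2n) and total weight = (2^n - 1) * 2^(2n - 1).
for n in (5, 7, 9, 17):
    h = (n - 1) // 2
    freqs = {
        0: 1,
        2**(n-1) - 2**h: (2**n - 1) * (2**(n-2) + 2**(h-1)),
        2**(n-1):        (2**n - 1) * (2**(n-1) + 1),
        2**(n-1) + 2**h: (2**n - 1) * (2**(n-2) - 2**(h-1)),
    }
    assert sum(freqs.values()) == 2**(2*n)
    assert sum(w * f for w, f in freqs.items()) == (2**n - 1) * 2**(2*n - 1)
print("distribution consistent")
[/PYTHON]
[THOUGHT]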
So for $n = 17$: Factor 1: $2^{17} - 1 = 131071$. Factor 2: $2^{15} + 2^{7} = 32768 + 128 = 32896$. Product: $131071 \times 32896$.
Let's verify the multiplication $131071 \times 32896$ by hand, splitting $32896 = 30000 + 2000 + 800 + 90 + 6$:
$131071 \times 6 = 786426$
$131071 \times 90 = 11796390$
$131071 \times 800 = 104856800$
$131071 \times 2000 = 262142000$
$131071 \times 30000 = 3932130000$
Running sum: $786426 + 11796390 = 12582816$; $+\,104856800 = 117439616$; $+\,262142000 = 379581616$; $+\,3932130000 = 4311711616$. Digit-by-digit addition is error-prone, though.
Let's use Python for the multiplication to be sure (see the check after this thought). Also, confirm the logic that $A_d + A_{d+1} + A_{d+2}$ is exactly the count of the minimal core weight. The core weights are separated by gaps of $2^{(n-1)/2}$. The "jitter" from $\mathrm{Tr}(u) + \mathrm{Tr}(v)$ is at most 2: $12 + 2 = 14 < 16$ (gap 4 for $n = 5$). For $n = 17$, the minimal core weight is $2^{16} - 2^{8} = 65280$; the next core weight is $2^{16} = 65536$; $65280 + 2 = 65282 < 65536$. So indeed, the weights $d, d+1, d+2$ can ONLY come from the minimal core weight. And since each pair $(u, v)$ contributes its full weight core $+\ \mathrm{Tr}(u) + \mathrm{Tr}(v)$, and we sum the counts over all three offsets 0, 1, 2 at once, the sum we want is simply the number of pairs that produce the minimum core weight. Which is exactly the frequency we calculated.
Is it possible that for a specific $(u, v)$ yielding minimal core weight, $\mathrm{Tr}(u) + \mathrm{Tr}(v)$ is fixed? It does not matter. The set of core functions $\{\mathrm{Tr}(ux + bx^3)\}$ is a vector space of dimension $2n$, and the weight distribution counts how many vectors in this space have a given weight. For any vector of minimal core weight, the extra bits add 0, 1, or 2, mapping it to weight $d$, $d+1$, or $d+2$; and no vector from a higher weight level (minimum $2^{n-1}$) can fall into this range. So the answer is indeed $(2^{17} - 1)(2^{15} + 2^{7}) = 131071 \times 32896$.
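Final numeric check of the product and of the formula at $n = 5, 7, 9$ (plain integer arithmetic):
[/THOUGHT]
[PYTHON]
def S(n):
    return (2**n - 1) * (2**(n - 2) + 2**((n - 3) // 2))
print(131071 * 32896)             # 4311711616
print(S(17))                      # 4311711616
print([S(n) for n in (5, 7, 9)])  # [310, 4572, 69496]
[/PYTHON]
[THOUGHT] Confirmed: $131071 \times 32896 = 4311711616$, and $S(5), S(7), S(9) = 310, 4572, 69496$ match the simulation.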
[/THOUGHT] \boxed{4311711616}