Talk:Coding theory

Hamming Distance figure is surely wrong?

For starters, the description states that we are looking at the Hamming distance of x and y at each pixel with coordinate (x, y), represented by 16 colors. This implies the maximum Hamming distance is 16 and that x and y are each represented with 16 bits. So why is the figure wider than it is tall? It should be a perfect square. Furthermore, I'd expect to see a nice thick diagonal of a single color along x = y (since x ^ x = 0). No such diagonal line stretches all the way from one corner to another, and the thick diagonal lines that are present do not start at any of the corners of the figure. Finally, I would expect the plot to change abruptly at every halfway point (as x or y goes from 0111... to 1000...). While such regions exist, they are not spaced at the halfway point, nor for that matter do any of their corners align with an edge of the figure.
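For reference, here is a minimal Python sketch (assuming NumPy and matplotlib; the downscaling to 8-bit coordinates is mine, purely to keep the image small) of what I would expect such a figure to look like:

    import numpy as np
    import matplotlib.pyplot as plt

    # 8-bit coordinates give a manageable 256x256 image; the structure
    # (zero diagonal, halfway-point transitions) is the same for 16 bits.
    n_bits = 8
    N = 1 << n_bits
    x, y = np.meshgrid(np.arange(N), np.arange(N))

    d = np.bitwise_xor(x, y)      # Hamming distance = popcount(x XOR y)
    dist = np.zeros_like(d)
    for _ in range(n_bits):
        dist += d & 1
        d >>= 1

    # Expect a single-color diagonal along x == y (distance 0) and abrupt
    # changes wherever a high bit flips, e.g. at 0111... -> 1000...
    plt.imshow(dist, origin='lower')
    plt.title('Hamming distance between bit patterns of x and y')
    plt.show()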

Final nail in the coffin: the figure's copyright info tells me this is someone's own work and wasn't sourced from anywhere that can be verified or reviewed. I move that the figure be recalculated or removed. — Preceding unsigned comment added by 137.111.13.126 (talk) 10:21, 11 October 2019 (UTC)[reply]

Untitled

Coding theory generally refers to the two subjects Claude Shannon treated: source coding (compression) and channel coding (error correction). Cryptography is a different subject altogether.

.

I'm a bit of a neophyte here, but would it be proper to mention the sub-categories of coding theory as:

1) Compression coding
2) Coding for secrecy (cryptography)
3) Error correction/detection
4) Any others?

It sort of does, but it's not explicit.

Are compression and crypto part of coding theory? Maybe they are, but I think of coding theory as dealing with communications channels. Compression and crypto are independent of communications. Mirror Vax 08:27, 7 May 2005 (UTC)[reply]
Cryptography (encryption or authentication) is fundamentally linked to communications in that there is always a sender and a receiver (even if the receiver is a forgetful sender in the future) - that the medium can be a hard drive or RAM (or similar) shouldn't be relevant. Authentication (signing) is even an obvious stronger form of error detection. 86.180.15.157 (talk) 10:17, 12 July 2010 (UTC) someone else[reply]
Getting back to the original suggestion, crypto would surely be a subset of channel coding, since it's focused on the properties of the channel (lack of secrecy/too much mutability) rather than the properties of the original data. But I concede that you could argue that properties of the metadata (i.e., sensitivity or potential risk) are also critical, which would make it akin to (yet not a part of) source coding. 86.180.15.157 (talk) 10:56, 12 July 2010 (UTC)[reply]

Merge with "code" ?

Suggest this article should be merged with code. Both cover much the same territory, and both could use some beefing up.

This article goes into rather more detail, but that detail may be better devolved to data compression and error correction.

-- Jheald 21:21, 5 March 2007 (UTC).[reply]

On second thoughts, maybe not. But then there needs to be some refactoring and clearer definition between the two. Jheald 22:39, 6 March 2007 (UTC)[reply]

Expand section tag

I have tagged the "source coding" section as one that could do with some expansion.

Source coding now redirects to data compression, which I think is probably correct for the term overall; but it requires that we make sure that material about the code-level aspects of source coding is still highlighted, and that this article takes up some of the slack too.

Relevant material that should be pointed to might include variable-length codes, prefix codes, Kraft inequality, Shannon's source coding theorem, ... more? -- Jheald 22:39, 6 March 2007 (UTC)[reply]
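To make one of these concrete: the Kraft inequality states that a binary prefix code with codeword lengths l_1, ..., l_n exists if and only if the sum of 2^(-l_i) is at most 1. A quick sketch (Python; the function name is mine):

    def kraft_sum(lengths, radix=2):
        """Kraft sum of a list of codeword lengths; a prefix code with these
        lengths over a radix-ary alphabet exists iff the sum is <= 1."""
        return sum(radix ** -l for l in lengths)

    print(kraft_sum([1, 2, 3, 3]))  # 1.0  -- e.g. the complete code {0, 10, 110, 111}
    print(kraft_sum([1, 1, 2]))     # 1.25 -- no binary prefix code has these lengths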

Expression: "minimize the entropy" right?

I wonder whether this is the right expression:

Quote: "Data compression which explicitly tries to minimise the entropy of messages according to a particular probability model is called entropy encoding."

Isn't "entropy" the amount of disorder or randomness in information? So once compressed, the disorder will rather be maximized and not minimized. AAAAAA has little disorder, is thus little compressed and, to my opinion, exhibits a low entropy.

-- User:haensel 11:57, 9 May 2008 (UTC)[reply]

I think surely "maximise the entropy" was intended. In this context it's easier to take "entropy" as meaning "density of interesting data" rather than "disorder" (although strictly speaking, from the receiving end it is disorder). Although it might be that it's talking about reducing "redundant entropy", which would strictly be an abuse of the term. 86.180.15.157 (talk) 10:24, 12 July 2010 (UTC)[reply]
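To put numbers on the AAAAAA example, the per-symbol Shannon entropy of a string's empirical symbol distribution can be computed directly; a minimal sketch (Python, assuming the usual i.i.d. symbol model):

    from collections import Counter
    from math import log2

    def empirical_entropy(s):
        """Per-symbol Shannon entropy (bits) of the empirical symbol
        distribution of s, i.e. treating symbols as i.i.d. draws."""
        n = len(s)
        return sum(-(c / n) * log2(c / n) for c in Counter(s).values())

    print(empirical_entropy("AAAAAA"))    # 0.0 -- no disorder, maximally compressible
    print(empirical_entropy("ABABAB"))    # 1.0 bit/symbol
    print(empirical_entropy("ABCDEFGH"))  # 3.0 bits/symbol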

Citation not needed

There are a lot of "citation needed" tags on this page. I don't really see the need for all of them. Mathematical facts are usually not given citations on Wikipedia, as anyone can check by reading other mathematical articles. This is a sensible policy, since they are not likely to be controversial.

Some of the claims with the "citation needed" tag definitely fall into this category, like the claim that no code can do better than the entropy of the source. Anyone who has taken an introductory information theory course will likely have seen the proof of that claim. —Preceding unsigned comment added by 90.229.231.115 (talk) 20:41, 6 August 2008 (UTC)[reply]
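For the record, the claim in question is Shannon's source coding theorem, which in its usual form reads:

    % Shannon's source coding theorem: any uniquely decodable binary code
    % for a source X has expected codeword length at least the entropy of X.
    L \;=\; \sum_{x \in \mathcal{X}} p(x)\,\ell(x)
      \;\ge\; H(X) \;=\; -\sum_{x \in \mathcal{X}} p(x)\,\log_2 p(x)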

I agree a number of these are uncontroversial and unnecessary, and I will remove them. If there is some controversy about Reed-Solomon codes being used for audio CDs for example, please discuss here. Relaxing (talk) 17:01, 11 February 2009 (UTC)[reply]

Reference

Removing my addition to the references, "F. J. MacWilliams, N. J. A. Sloane, The Theory of Error-Correcting Codes", is a bad idea: I worked in coding theory for five years and know that this is a fundamental book. I am no longer as interested in coding theory, so I do not plan to re-add the reference in the correct format myself; I note this here in case somebody else wants to do it. Vavlap (talk) 10:52, 23 July 2008 (UTC)[reply]

The problem isn't with the book, or with how you wrote it in the references section. The only problem is that you did not *also* use the book in the article. For instance:
In particular, no source coding scheme can be better than the entropy of the source.
currently needs a citation to a reliable source, such as MacWilliams and Sloane. Find out where that book says this is true, and add the page number. If it was on page 50, it would be fine to say:
In particular, no source coding scheme can be better than the entropy of the source (MacWilliams & Sloane, p. 50).
Someone else will add the fancy wiki syntax. JackSchmidt (talk) 14:24, 23 July 2008 (UTC)[reply]

Keep detailed content on other pages?

I don't think this article should repeat the content from other pages, such as FEC coding from Forward error correction and (summarized) in Error detection and correction. Rather than duplicate (triplicate?) the efforts, all with differing content and quality, maybe the article would be of higher quality if it merely consisted of the headings and referred the reader to the relevant pages (e.g., Forward error correction) for the details? Thoughts? Nageh (talk) 17:21, 9 October 2009 (UTC)[reply]

Oh, but it could definitely cover essentials such as the Noisy channel coding theorem! Nageh (talk) 17:30, 9 October 2009 (UTC)[reply]

I agree with both comments above. In the literature, coding theory is mainly concerned with transmitting data across noisy channels and recovering the message, i.e. error detection and correction. Thus, in my opinion, the article should start with a header linking to pages that discuss various uses of codes in subjects like data compression, cryptography, and network coding, and subsequently focus on a few important concepts from the field of error-correcting codes (such as the noisy channel coding theorem, algebraic coding theory, and capacity-approaching codes like LDPC codes and turbo codes), including links to the main article on each of these subjects. Isheden (talk) 16:12, 9 July 2010 (UTC)[reply]
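For reference, the noisy channel coding theorem referred to above says that for a discrete memoryless channel, every rate below capacity is achievable with arbitrarily small error probability, and no rate above it is, where:

    % Capacity of a discrete memoryless channel, maximized over input
    % distributions p(x); rates R < C are achievable, rates R > C are not.
    C \;=\; \max_{p(x)} I(X;Y)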

To allow this article to evolve, I think we need a common understanding of what should be covered here. One common definition of (algebraic) coding theory is given in MathWorld: Weisstein, Eric W. "Coding Theory." From MathWorld--A Wolfram Web Resource. Based on this definition, in my opinion the source coding topics should be split to source coding or merged with data compression, and the various topics starting from "Other applications of coding theory" should be merged with the article code or the disambiguation page coding. What do you think of this proposal? Isheden (talk) 10:10, 31 May 2011 (UTC)[reply]

Neural coding

The section on "Neural coding" was removed with the comment that neural coding is "protocol coding" and not source or channel coding. I beg to differ. See for example the papers Information theory and neural coding, Neural coding and decoding: communication channels and quantization, A simple coding procedure enhances a neuron's information capacity, Information-theoretic analysis of neural coding. Bethnim (talk) 22:06, 6 April 2010 (UTC)[reply]

Your one added half-sentence was crucial in confirming the connection to coding theory; your previous paragraph did not mention a word about it, hence my previous revert was justified. Anyway, in case you know more about it, it would be nice if you could elaborate, e.g., on how they could identify the concept of minimum-distance coding, and so on. The second ref in your list above seems like a good addition. -Nageh (talk) 08:51, 7 April 2010 (UTC)[reply]

Ad: Proposed merge of section Linear block codes to article Block codes

I suggest that the section is merged into Forward error correction rather than an article on Block codes. The latter would seem like an appropriate redirect to Forward error correction instead. However, as this article and Forward error correction currently stand, they both need a complete rewrite anyway. Nageh (talk) 15:05, 27 July 2010 (UTC)[reply]

Oh, alright, there is the Block code article, which already contains exactly what is proposed for merging. I think it is safe to remove the text here, but on the other hand, as mentioned above, the article needs rewriting anyway. Nageh (talk) 15:13, 27 July 2010 (UTC)[reply]

Is EFM a line code, a channel code, or both?

Some people say that eight-to-fourteen modulation is a kind of "line code" -- such as the Wikipedia article line code and template {{bit-encoding}}.

Other people say that eight-to-fourteen modulation is a kind of "channel code" -- such as the Wikipedia article eight-to-fourteen modulation (and one of its references[1]).

So is "line code" synonymous with "channel code"? If not, what exactly is the difference between them, and how is it possible that eight-to-fourteen modulation can be both at the same time? --DavidCary (talk) 16:29, 12 May 2017 (UTC)[reply]

References

Undefined symbol

In the section Source coding there is a definition that begins as follows:

"Data can be seen as a random variable , where appears with probability ."

But the symbol $\Omega$ is never defined.

This is an encyclopedia. We cannot expect readers to telepathically know what editors are thinking.

All symbols need to be defined.