$ cat data.txt
aaaaaa
aaaaaa
cccccc
aaaaaa
aaaaaa
bbbbbb
$ cat data.txt | uniq
aaaaaa
cccccc
aaaaaa
bbbbbb
$ cat data.txt | sort | uniq
aaaaaa
bbbbbb
cccccc
$

The result I need is to display all the lines from the original file with every duplicate removed (not just the consecutive ones), while maintaining the original order of the lines.
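One common way to do this, preserving first-occurrence order, is the awk idiom `!seen[$0]++` (a sketch using a temporary copy of the sample data, not taken from the post itself):

```shell
# Recreate the sample file from the transcript above.
printf 'aaaaaa\naaaaaa\ncccccc\naaaaaa\naaaaaa\nbbbbbb\n' > data.txt

# seen[$0]++ evaluates to 0 (false) the first time a line appears,
# so !seen[$0]++ is true and the line is printed; on every later
# occurrence the counter is nonzero and the line is suppressed.
awk '!seen[$0]++' data.txt
# aaaaaa
# cccccc
# bbbbbb
```

Unlike `sort | uniq`, this keeps the original order, because awk only tracks which lines it has already printed rather than reordering them. The cost is memory proportional to the number of distinct lines, which matters for very large files.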