Search results

  • uniq | sed 'N;/^\(.*\)\n\1$/!P;D'
    11 KB (1,730 words) - 00:56, 18 May 2015
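    The sed expression in that snippet looks truncated from the classic one-liner that emulates uniq on consecutive duplicate lines; a complete, runnable form, with a hypothetical file name:
      $ sed '$!N; /^\(.*\)\n\1$/!P; D' file.txt   # keep the first line of each run of duplicates
    The $! guard keeps sed from trying to read past end-of-input when N runs on the last line.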
  • $ cat unsorted_list_with_dupes.txt | sort | uniq | pr | lpr
    20 KB (3,106 words) - 23:38, 10 September 2021
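    Because uniq only collapses adjacent duplicates, the sort stage is what makes that deduplication global; sort -u (standard POSIX behavior) folds the first two stages into one:
      $ sort -u unsorted_list_with_dupes.txt | pr | lpr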
  • * delete duplicate, consecutive lines from a file (emulates "uniq"). First line in a set of duplicate lines is kept, rest are deleted. * delete all lines except duplicate lines (emulates "uniq -d").
    30 KB (4,929 words) - 22:47, 21 August 2019
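    The "uniq" emulation is the one-liner shown above; the "uniq -d" emulation that the same description mentions is, assuming GNU or BSD sed and a hypothetical file name:
      $ sed '$!N; s/^\(.*\)\n\1$/\1/; t; D' file.txt   # print one copy of each duplicated line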
  • Operating on sorted files: sort uniq comm ptx tsort ... uniq - report or filter out repeated lines in a file
    9 KB (1,324 words) - 23:35, 6 February 2013
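    A minimal illustration of the "report or filter out" behaviors, with a hypothetical input file:
      $ sort names.txt | uniq       # filter: one copy of each line
      $ sort names.txt | uniq -c    # report: prefix each line with its repeat count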
  • rpm -qa --queryformat='%{NAME} %{ARCH}\n' | sort | uniq > pkgs.txt
    8 KB (1,064 words) - 22:36, 19 April 2023
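    A plausible follow-up to that listing, assuming the goal is to spot packages installed for more than one architecture (names that repeat once ARCH is dropped); this is a sketch, not a command from the source page:
      $ rpm -qa --queryformat='%{NAME}\n' | sort | uniq -d   # hypothetical: names appearing under multiple arches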
  • ...var/log/httpd/error_log |cut -d']' -f 4-99 |sed -e "s/, referer.*//g"|sort|uniq
    3 KB (400 words) - 21:59, 17 May 2015
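    Appending a counting stage ranks the deduplicated error messages by frequency; the full log path here is an assumption, since the snippet truncates it:
      $ cut -d']' -f 4-99 /var/log/httpd/error_log | sed -e "s/, referer.*//g" | sort | uniq -c | sort -rn
      # /var/log/httpd/error_log completes the truncated '...var/log/...' path and may differ per system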
  • awk '{if ($5 != -1) { print;}}' demo.txt | cut -f1-5 | sort -k1 -r -n | uniq | wc -l
    2 KB (288 words) - 17:25, 9 January 2010
  • ... Next it maps all uppercase characters to lower case, and finally it runs uniq with the -d option to print out only the words that were repeated. ... | uniq -d
    11 KB (1,812 words) - 07:26, 1 July 2020
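    That description matches the classic repeated-word pipeline; a sketch assuming GNU or BSD tr and a hypothetical input file:
      $ tr -cs 'A-Za-z' '\n' < chapter.txt | tr 'A-Z' 'a-z' | sort | uniq -d
      # split into one word per line, lowercase, sort, then print only the repeated words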
  • $ time for i in $(seq 1 10000); do curl -s 10.101.133.100:8080; done | sort | uniq -c
    168 KB (22,699 words) - 17:26, 19 January 2024