How to remove rows with any zero value

I have a problem: how to remove rows that contain a zero value in R. I know I can use na.omit() to delete all rows with NA values, or complete.cases() to keep only the rows without NAs.

Does anyone know how to remove rows containing a zero value in R?
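
For reference, the NA-based approaches mentioned above look roughly like this (a minimal sketch; the `data` frame here is a made-up stand-in for my table):

```r
# Hypothetical data with some NAs
data <- data.frame(x = c(1, NA, 3), y = c(2, 5, NA))

# Drop rows containing any NA
na.omit(data)                   # keeps only the first row

# Equivalent with complete.cases()
data[complete.cases(data), ]
```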

For example :

Before

|    DateTime      | Mac1  | Mac2  | Mac3  | Mac4  |
----------------------------------------------------
| 2011-04-02 06:00 | 20    | 0     | 20    | 20    |  
| 2011-04-02 06:05 | 21    | 21    | 21    | 21    |  
| 2011-04-02 06:10 | 22    | 22    | 22    | 22    |  
| 2011-04-02 06:15 | 23    | 23    | 0     | 23    |  
| 2011-04-02 06:20 | 24    | 24    | 24    | 24    | 
| 2011-04-02 06:25 | 0     | 25    | 25    | 0     | 

After

|    DateTime      | Mac1  | Mac2  | Mac3  | Mac4  |
----------------------------------------------------
| 2011-04-02 06:05 | 21    | 21    | 21    | 21    |  
| 2011-04-02 06:10 | 22    | 22    | 22    | 22    |  
| 2011-04-02 06:20 | 24    | 24    | 24    | 24    |  
Aguilar answered 2/4, 2012 at 13:48 Comment(4)
One route: reduce this to a problem you've already solved by replacing the zeros with NAs.Hun
Thanks Joran for your reply, but I don't understand what you mean by replacing the zeros with NAs. I already deleted the NA values before producing this table, but there are still 0 values. Could you tell me how to do it?Aguilar
OK, I think I must use this code to replace 0 with NA: data[which(data==0)] = NAAguilar
@Aguilar I think instead you need data[which(data==0)] = NA_character_Strata

There are a few different ways of doing this. I prefer using apply, since it's easily extendable:

##Generate some data
dd <- data.frame(a = 1:4, b = 1:0, c = 0:3)

##Flag the rows in which every value is non-zero
row_sub <- apply(dd, 1, function(row) all(row != 0))
##Subset as usual
dd[row_sub, ]
##  a b c
##3 3 1 2
Serpentiform answered 2/4, 2012 at 14:1 Comment(0)

You can use filter from the dplyr package.

Let's call your data frame df:

library(dplyr)
df1 <- filter(df, Mac1 > 0, Mac2 > 0, Mac3 > 0, Mac4 > 0)

df1 will contain only the rows in which every Mac column is above zero. Note that > 0 also drops negative values; use != 0 if you want to drop only zeros. Hope this helps.

Patronage answered 10/10, 2018 at 10:47 Comment(0)

I would do the following.

Set the zeros to NA:

 data[data == 0] <- NA
 data

Then delete the rows containing NA:

 data2 <- data[complete.cases(data), ]
Ruder answered 14/5, 2018 at 20:50 Comment(0)

I would probably go with Joran's suggestion of replacing 0s with NAs and then using the built-in functions you mentioned. If you can't, or don't want to, do that, one approach is to use any() to find rows that contain 0s and subset them out:

set.seed(42)
#Fake data
x <- data.frame(a = sample(0:2, 5, TRUE), b = sample(0:2, 5, TRUE))
> x
  a b
1 2 1
2 2 2
3 0 0
4 2 1
5 1 2
#Subset out any rows with a 0 in them
#Note the negation with ! around the apply function
x[!(apply(x, 1, function(y) any(y == 0))),]
  a b
1 2 1
2 2 2
4 2 1
5 1 2

To implement Joran's method, something like this should get you started:

x[x==0] <- NA
Nata answered 2/4, 2012 at 14:5 Comment(1)
Thanks anyway, but I have done it with csgillespie's solution ;)Aguilar

Well, you could swap your 0s for NAs and then use one of those solutions, but for the sake of variety: a number has a finite logarithm only if it is greater than 0, so rowSums of the log will be finite only for rows that contain no zeros (note this also drops rows with negative values, whose log is NaN):

dfr[is.finite(rowSums(log(dfr[-1]))),]
Volcanology answered 2/4, 2012 at 14:1 Comment(1)
+1 for clever, but I would certainly prefer an all/any solution in practice ...Spevek

Using tidyverse/dplyr, you can also remove rows with any zero value in a subset of variables:

# variables starting with Mac must be non-zero
filter_at(df, vars(starts_with("Mac")), all_vars((.) != 0))

# variables x, y, and z must be non-zero
filter_at(df, vars(x, y, z), all_vars((.) != 0))

# all numeric variables must be non-zero
filter_if(df, is.numeric, all_vars((.) != 0))
Eastman answered 13/12, 2019 at 23:44 Comment(0)

I prefer a simple adaptation of csgillespie's method, foregoing the need of a function definition:

d[apply(d != 0, 1, all), ]

where d is your data frame.

Dairymaid answered 20/12, 2017 at 20:42 Comment(1)
This does not work for me, but this works: d[!apply(d==0, 1, all),]Triaxial

As dplyr 1.0.0 deprecated the scoped variants that @Feng Mai nicely showed, here is an update with the new syntax. This might be useful because a plain across() doesn't work inside filter() here, and it took me some time to figure out the solution below.

The goal here is the inverse of the question: extract all rows that contain at least one 0 in any column.

df %>% 
  rowwise() %>% 
  filter(any(c_across(everything()) == 0))

# A tibble: 4 x 3
# Rowwise: 
      a     b     c
  <dbl> <dbl> <dbl>
1     1     1     0
2     2     0     1
3     4     0     3
4     0     0     0

with the data

df <- data.frame(a = 1:4, b= 1:0, c=0:3)
df <- rbind(df, c(0,0,0))
df <- rbind(df, c(9,9,9))

So it correctly doesn't return the last row containing all 9s.

Timmytimocracy answered 8/1, 2021 at 9:35 Comment(0)

I have tried many ways, but in my experience the best is to set the zeros to NA and then remove the rows containing NA with complete.cases():

dt[dt == 0] <- NA
dt_ <- dt[complete.cases(dt), ]
Firewater answered 24/5, 2022 at 17:24 Comment(0)

In base R, we can select the columns we want to test using grep, compare the data with 0, and use rowSums to keep only the rows in which every tested value is non-zero.

cols <- grep("^Mac", names(df))
df[rowSums(df[cols] != 0) == length(cols), ]

#          DateTime Mac1 Mac2 Mac3 Mac4
#1 2011-04-02 06:05   21   21   21   21
#2 2011-04-02 06:10   22   22   22   22
#3 2011-04-02 06:20   24   24   24   24

Doing this with inverted logic gives the same output:

df[rowSums(df[cols] == 0) == 0, ]

In dplyr, we can use filter_at to test for specific columns and use all_vars to select rows where all the values are not equal to 0.

library(dplyr)
df %>%  filter_at(vars(starts_with("Mac")), all_vars(. != 0))

data

df <- structure(list(DateTime = structure(1:6, .Label = c("2011-04-02 06:00", 
"2011-04-02 06:05", "2011-04-02 06:10", "2011-04-02 06:15", "2011-04-02 06:20", 
"2011-04-02 06:25"), class = "factor"), Mac1 = c(20L, 21L, 22L, 
23L, 24L, 0L), Mac2 = c(0L, 21L, 22L, 23L, 24L, 25L), Mac3 = c(20L, 
21L, 22L, 0L, 24L, 25L), Mac4 = c(20L, 21L, 22L, 23L, 24L, 0L
)), class = "data.frame", row.names = c(NA, -6L))
Zoochore answered 13/9, 2019 at 3:53 Comment(0)

Here is a more up-to-date approach:

library(dplyr)

df %>%
  filter(!if_any(starts_with("Mac"), ~ . == 0))

          DateTime Mac1 Mac2 Mac3 Mac4
1 2011-04-02 06:05   21   21   21   21
2 2011-04-02 06:10   22   22   22   22
3 2011-04-02 06:20   24   24   24   24
Weirick answered 1/9, 2021 at 10:41 Comment(0)

The shortest possible answer exploits implicit coercion of numeric to logical: zeros become FALSE and all other values TRUE, so the columns can be passed straight to if_any/if_all.

library(dplyr)

df %>% filter(if_all(starts_with('Mac')))
Manteltree answered 16/12, 2021 at 13:28 Comment(1)
Long time no see my friend :)Weirick

After breaking my head on a similar problem, I came up with this.

I wanted to keep the zeros in columns 1 and 9, where they made sense, but drop any row with a zero elsewhere. The code below keeps only rows whose values in columns 2 through 8 are all greater than zero, leaving columns 1 and 9 out of the test:

dataset %>% filter(data.frame(dataset[,2:8]>0))
Witkin answered 11/12, 2021 at 22:52 Comment(0)
