We show how Bayesian neural networks can be used for time-series analysis. We consider a block-based model-building strategy that captures both linear and nonlinear features of a time series: a linear combination of a linear autoregressive (AR) term and a feedforward neural network (FFNN) with an unknown number of hidden nodes. To allow for simpler models, we also treat these terms separately as competing models to select from. Model identifiability problems arise when the FFNN's sigmoidal activation functions behave almost linearly, or when hidden nodes are nearly duplicated or irrelevant. New reversible-jump moves are proposed to facilitate model selection and mitigate these identifiability problems. We illustrate the methodology by analyzing several time-series data examples.
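To make the model structure concrete, the following is a minimal sketch of a one-step prediction from the hybrid model described above: a linear AR term plus a sigmoidal FFNN term with k hidden nodes. All function names and parameter values here are illustrative assumptions, not the paper's implementation; `tanh` stands in for a generic sigmoidal activation.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_predict(y_lags, ar_coefs, weights_in, biases, weights_out, intercept):
    """One-step prediction: linear AR term plus sigmoidal FFNN term.

    y_lags      : the p most recent observations (lagged inputs)
    ar_coefs    : linear AR coefficients, length p
    weights_in  : (k, p) input weights for k hidden nodes
    biases      : (k,) hidden-node biases
    weights_out : (k,) output weights
    intercept   : overall intercept
    """
    linear = intercept + ar_coefs @ y_lags            # linear AR block
    hidden = np.tanh(weights_in @ y_lags + biases)    # sigmoidal hidden nodes
    return linear + weights_out @ hidden              # linear combination of blocks

# Hypothetical parameter values for illustration only
p, k = 2, 3
ar_coefs = np.array([0.5, -0.2])
weights_in = rng.normal(size=(k, p))
biases = rng.normal(size=k)
weights_out = rng.normal(size=k)

y_lags = np.array([1.0, 0.8])
print(hybrid_predict(y_lags, ar_coefs, weights_in, biases, weights_out, 0.1))
```

Note that setting `weights_out` to zero recovers the pure linear AR model, which is why the paper can treat the AR and FFNN blocks as competing submodels; near-linear or duplicated hidden nodes make distinct parameter settings produce nearly identical predictions, which is the identifiability problem the reversible-jump moves address.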